Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, Dacheng Tao
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks that span question answering, natural language inference, word sense disambiguation, coreference resolution, and reasoning. [Method] Instead of arbitrarily increasing the size of a pretrained language model (PLM), our aim is to 1) fully extract knowledge from the input pretraining data given a certain parameter budget, e.g., 6B, and 2) effectively transfer this knowledge to downstream tasks. To achieve goal 1), we propose self-evolution learning for PLMs, which wisely predicts the informative tokens that should be masked and supervises the masked language modeling (MLM) process with rectified smooth labels. For goal 2), we leverage the prompt transfer technique to improve performance on low-resource tasks by transferring knowledge from the foundation model and related downstream tasks to the target task. [Results] According to our submission record (Oct. 2022), with our optimized pretraining and fine-tuning strategies, our 6B Vega method achieved new state-of-the-art performance on 4 of the 8 tasks, sitting atop the SuperGLUE leaderboard on Oct. 8, 2022, with an average score of 91.3.
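The report only names the two ingredients of self-evolution learning: selecting informative tokens to mask, and supervising MLM with rectified smooth labels. As a rough illustrative sketch only (the function names, the per-token-loss ranking heuristic, and the `alpha` mixing weight below are our assumptions, not the paper's exact formulation), one simple reading is: rank tokens by how hard the current model finds them, mask the hardest ones, and blend the one-hot target with the model's own prediction into a smoothed supervision signal:

```python
import math

def select_informative_tokens(token_losses, mask_ratio=0.15):
    """Pick masking candidates by difficulty rather than uniformly at random.

    `token_losses` holds one MLM loss per token from a forward pass of the
    model being trained; tokens with the highest loss are treated as the
    most informative ones to mask. (Assumed heuristic, not the paper's
    verbatim criterion.)
    """
    n_mask = max(1, int(len(token_losses) * mask_ratio))
    ranked = sorted(range(len(token_losses)), key=lambda i: token_losses[i])
    return ranked[-n_mask:]  # indices of the n_mask hardest tokens

def rectified_smooth_labels(one_hot, model_probs, alpha=0.1):
    """Blend the ground-truth one-hot label with the model's predicted
    distribution and renormalize. This is a plain label-smoothing mix;
    the paper's exact "rectification" rule may differ."""
    mixed = [(1 - alpha) * t + alpha * p for t, p in zip(one_hot, model_probs)]
    total = sum(mixed)
    return [m / total for m in mixed]
```

The intent of such a scheme is that random masking wastes capacity on tokens the model already predicts trivially, while difficulty-aware masking and softened targets keep the MLM signal informative late in pretraining.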
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | COPA | Accuracy | 99.4 | Vega v2 6B (KD-based prompt transfer) |
| Question Answering | COPA | Accuracy | 98.2 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Question Answering | MultiRC | EM | 63.0 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Question Answering | MultiRC | F1 | 88.4 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Question Answering | MultiRC | EM | 62.4 | Vega v2 6B (fine-tuned) |
| Question Answering | MultiRC | F1 | 88.2 | Vega v2 6B (fine-tuned) |
| Question Answering | BoolQ | Accuracy | 92.0 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Question Answering | BoolQ | Accuracy | 90.5 | Vega v2 6B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | EM | 95.9 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | F1 | 96.4 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | EM | 93.9 | Vega v2 6B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | F1 | 94.4 | Vega v2 6B (fine-tuned) |
| Word Sense Disambiguation | Words in Context | Accuracy | 77.4 | Vega v2 6B (fine-tuned) |
| Word Sense Disambiguation | Words in Context | Accuracy | 77.1 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Natural Language Inference | RTE | Accuracy | 95.9 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Natural Language Inference | CommitmentBank | Accuracy | 99.2 | Vega v2 6B (KD-based prompt transfer) |
| Natural Language Inference | CommitmentBank | F1 | 98.6 | Vega v2 6B (KD-based prompt transfer) |
| Natural Language Inference | CommitmentBank | Accuracy | 97.6 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Natural Language Inference | CommitmentBank | F1 | 95.9 | Turing NLR v5 XXL 5.4B (fine-tuned) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 98.6 | Vega v2 6B (KD-based prompt transfer) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 97.3 | Turing NLR v5 XXL 5.4B (fine-tuned) |
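Several of the strongest entries above are labeled "KD-based prompt transfer", but the report does not spell out the training objective. A common knowledge-distillation formulation, which such a transfer plausibly builds on, combines a hard-label cross-entropy term with a KL-divergence term toward the teacher's temperature-softened distribution. The sketch below shows that generic KD loss only; the `temperature` and `lam` hyperparameters, and how the objective is wired to soft prompts, are assumptions rather than details from the report:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with optional temperature scaling."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label,
            temperature=2.0, lam=0.5):
    """Generic distillation objective: (1 - lam) * CE(hard label)
    + lam * T^2 * KL(teacher || student) on T-softened distributions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p_student = softmax(student_logits)
    ce = -math.log(p_student[true_label])
    p_teacher = softmax(teacher_logits, temperature)
    p_student_t = softmax(student_logits, temperature)
    kl = sum(pt * (math.log(pt) - math.log(ps))
             for pt, ps in zip(p_teacher, p_student_t))
    return (1 - lam) * ce + lam * temperature ** 2 * kl
```

In a prompt-transfer setting, only a small set of soft-prompt parameters would typically be optimized against this loss, with the teacher being a model fine-tuned on the source or target task; the report's exact recipe may differ.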