Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou
This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction as in the traditional sequence-to-sequence model, ProphetNet is optimized by n-step-ahead prediction, which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for future tokens and prevents overfitting on strong local correlations. We pre-train ProphetNet using a base-scale dataset (16GB) and a large-scale dataset (160GB), respectively. We then conduct experiments on the CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to models using the same scale of pre-training corpus.
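To make the objective concrete, below is a minimal sketch (not the authors' released code) of a future n-gram prediction loss, assuming a decoder that yields one hidden-state stream per future offset, in the spirit of the n-stream self-attention described above. The function and parameter names (`future_ngram_loss`, `vocab_proj`, `alpha`) and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a future n-gram prediction loss (illustrative, not ProphetNet's code).
# Assumes the decoder returns n hidden-state streams: stream k predicts the token
# k+1 steps ahead of the current position.
import torch
import torch.nn.functional as F


def future_ngram_loss(stream_hidden, targets, vocab_proj, pad_id=0, alpha=1.0):
    """n-step-ahead prediction loss.

    stream_hidden: list of n tensors, each (batch, seq_len, hidden);
                   stream_hidden[k] predicts the token k+1 steps ahead.
    targets:       (batch, seq_len) gold next-token ids.
    vocab_proj:    linear projection (hidden -> vocab) shared across streams.
    alpha:         illustrative weight on the future (k > 0) streams.
    """
    total = 0.0
    for k, hidden in enumerate(stream_hidden):
        # Stream k targets are the gold tokens shifted k positions to the left;
        # the last k positions have no future target and are masked with pad_id.
        if k > 0:
            shifted = torch.full_like(targets, pad_id)
            shifted[:, :-k] = targets[:, k:]
        else:
            shifted = targets
        logits = vocab_proj(hidden)  # (batch, seq_len, vocab)
        loss_k = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            shifted.reshape(-1),
            ignore_index=pad_id,
        )
        total = total + (1.0 if k == 0 else alpha) * loss_k
    return total / len(stream_hidden)


# Toy usage with random hidden states for n = 2 predicting streams.
if __name__ == "__main__":
    torch.manual_seed(0)
    vocab, hidden_dim, n = 100, 32, 2
    proj = torch.nn.Linear(hidden_dim, vocab)
    streams = [torch.randn(4, 10, hidden_dim) for _ in range(n)]
    gold = torch.randint(1, vocab, (4, 10))  # ids > 0 so pad_id=0 only masks
    print(future_ngram_loss(streams, gold, proj).item())
```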
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Text Summarization | Gigaword | ROUGE-1 | 39.51 | ProphetNet |
| Text Summarization | Gigaword | ROUGE-2 | 20.42 | ProphetNet |
| Text Summarization | Gigaword | ROUGE-L | 36.69 | ProphetNet |
| Text Summarization | CNN/DailyMail | ROUGE-1 | 44.20 | ProphetNet |
| Text Summarization | CNN/DailyMail | ROUGE-2 | 21.17 | ProphetNet |
| Text Summarization | CNN/DailyMail | ROUGE-L | 41.30 | ProphetNet |
| Question Generation | SQuAD 1.1 | BLEU-4 | 23.91 | ProphetNet |
| Question Generation | SQuAD 1.1 | METEOR | 26.60 | ProphetNet |
| Question Generation | SQuAD 1.1 | ROUGE-L | 52.30 | ProphetNet |