Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive analysis along three axes of the pretraining pipeline -- model architecture, optimization objective, and pretraining corpus -- we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention and pretrain the model with a masked-span prediction task using spans of varying lengths. In terms of the pretraining corpus, we find that randomly concatenated short documents from a large open-domain corpus yield better performance than existing long-document corpora, which are typically limited in domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods that use larger models. Our code has been released at https://github.com/facebookresearch/bart_ls.
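The core architectural change -- replacing full self-attention with pooling-augmented blockwise attention -- can be illustrated with a minimal sketch. The block size, mean pooling, and single-head layout below are illustrative assumptions rather than the paper's exact design; the released implementation in facebookresearch/bart_ls differs in detail. The idea shown here is that each token attends within its local block plus to a pooled summary of every block, giving coarse global context at sub-quadratic cost.

```python
# Minimal sketch (not the authors' implementation) of pooling-augmented
# blockwise attention: local attention within each block, plus attention
# to mean-pooled per-block summaries for global context.
import torch
import torch.nn.functional as F


def pooled_block_attention(q, k, v, block_size=128):
    """q, k, v: (batch, seq_len, dim); seq_len assumed divisible by block_size."""
    b, n, d = q.shape
    nb = n // block_size

    # Reshape into blocks: (batch, num_blocks, block_size, dim).
    qb = q.view(b, nb, block_size, d)
    kb = k.view(b, nb, block_size, d)
    vb = v.view(b, nb, block_size, d)

    # Mean-pool each block's keys/values into one summary vector per block.
    k_pool = kb.mean(dim=2)  # (batch, num_blocks, dim)
    v_pool = vb.mean(dim=2)

    # Each query attends to its own block's tokens plus all pooled summaries.
    k_ctx = torch.cat([kb, k_pool.unsqueeze(1).expand(b, nb, nb, d)], dim=2)
    v_ctx = torch.cat([vb, v_pool.unsqueeze(1).expand(b, nb, nb, d)], dim=2)

    scores = torch.einsum("bnqd,bnkd->bnqk", qb, k_ctx) / d ** 0.5
    attn = F.softmax(scores, dim=-1)
    out = torch.einsum("bnqk,bnkd->bnqd", attn, v_ctx)
    return out.reshape(b, n, d)


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)
    print(pooled_block_attention(x, x, x).shape)  # torch.Size([2, 1024, 64])
```

With a fixed block size, per-token cost grows with block_size + num_blocks rather than with the full sequence length, which is what makes pretraining and fine-tuning on long inputs tractable.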
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Long-Text Understanding | SCROLLS | Average score | 39.76 | BART-LS |
| Long-Text Understanding | SCROLLS | CNLI (ContractNLI) | 87.1 | BART-LS |
| Long-Text Understanding | SCROLLS | Nrtv (NarrativeQA) | 26.2 | BART-LS |
| Long-Text Understanding | SCROLLS | Qspr (Qasper) | 48.7 | BART-LS |
| Text Summarization | GovReport | ROUGE-1 | 62 | BART-LS |
| Text Summarization | arXiv | ROUGE-1 | 50.2 | BART-LS |
| Text Summarization | PubMed | ROUGE-1 | 50.3 | BART-LS |
| Text Summarization | QMSum | ROUGE-1 | 37.9 | BART-LS |
| Text Summarization | BookSum | ROUGE | 38.5 | BART-LS |