Mihir Kale, Abhinav Rastogi
We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 enables simple, end-to-end transformer-based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language-model-based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.
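The end-to-end setup treats data-to-text as a plain sequence-to-sequence problem: the structured input is flattened into a text string and fed to the pre-trained model. The sketch below illustrates one way to linearize a WebNLG-style set of (subject, predicate, object) triples; the `<S>`/`<P>`/`<O>` tag format is an illustrative assumption, not the paper's exact scheme.

```python
def linearize_triples(triples):
    """Flatten (subject, predicate, object) triples into one input string.

    The separator tags below are an assumed format for illustration; the
    key idea is only that structured data becomes flat text suitable for
    a text-to-text model such as T5.
    """
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)


# Example WebNLG-style input describing one entity.
triples = [
    ("John E. Blaha", "birthPlace", "San Antonio"),
    ("John E. Blaha", "occupation", "Fighter pilot"),
]
print(linearize_triples(triples))
# → <S> John E. Blaha <P> birthPlace <O> San Antonio <S> John E. Blaha <P> occupation <O> Fighter pilot
```

The resulting string is the model input; the reference sentence (e.g. "John E. Blaha, born in San Antonio, worked as a fighter pilot.") is the target during fine-tuning.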
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Text Generation | WebNLG | BLEU | 64.7 | T5-Base |
| Text Generation | MULTIWOZ 2.1 | BLEU | 35.1 | T5-Base |
| Text Generation | WebNLG Full | BLEU | 57.1 | T5-Large |
| Text Generation | ToTTo | BLEU | 49.5 | T5-3B |
| Text Generation | ToTTo | PARENT | 58.4 | T5-3B |