Metric: BLEU (higher is better)
| # | Model | BLEU | Extra Data | Paper | Date |
|---|---|---|---|---|---|
| 1 | T5B Baseline | 48.74 | No | - | - |
| 2 | T5B Baseline | 48.47 | No | - | - |
| 3 | FactT5B | 48.37 | No | - | - |
| 4 | self-mem + new data | 47.76 | No | Self-training from Self-memory in Data-to-text G... | 2024-01-19 |
| 5 | JointGT Baseline | 47.51 | No | - | - |
| 6 | FactJointGT | 47.39 | No | - | - |
| 7 | HTLM (fine-tuning) | 47.2 | No | HTLM: Hyper-Text Pre-Training and Prompting of L... | 2021-07-14 |
| 8 | GPT-2-Large (fine-tuning) | 47.0 | No | HTLM: Hyper-Text Pre-Training and Prompting of L... | 2021-07-14 |
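For reference, the BLEU scores above combine clipped n-gram precision with a brevity penalty. Published leaderboard numbers are typically computed with a standard corpus-level tool such as sacreBLEU; the sketch below is a simplified, sentence-level illustration of the formula (the `bleu` helper and its smoothing constant are illustrative, not the evaluation script used here):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of clipped n-gram
    precisions (n = 1..max_n) scaled by a brevity penalty, on a 0-100 scale."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate precision.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # tiny floor avoids log(0)
    # Brevity penalty: candidates shorter than the reference are discounted.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to its reference scores 100; shorter or partially overlapping outputs score lower, which is why "higher is better" for this table.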