Metric: METEOR (higher is better)
| # | Model | METEOR | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | Control Prefixes (T5-large) | 0.411 | No | Control Prefixes for Parameter-Efficient Text Ge... | 2021-10-15 | Code |
| 2 | T5B Baseline | 0.4074 | No | - | - | Code |
| 3 | FactT5B | 0.4072 | No | - | - | Code |
| 4 | JointGT Baseline | 0.4043 | No | - | - | Code |
| 5 | FactJointGT | 0.4032 | No | - | - | Code |
| 6 | HTLM (fine-tuning) | 0.39 | No | HTLM: Hyper-Text Pre-Training and Prompting of L... | 2021-07-14 | - |
| 7 | GPT-2-Large (fine-tuning) | 0.39 | No | HTLM: Hyper-Text Pre-Training and Prompting of L... | 2021-07-14 | - |
| 8 | T5 | 0.115 | No | The GEM Benchmark: Natural Language Generation, ... | 2021-02-02 | - |
| 9 | BART | 0.107 | No | The GEM Benchmark: Natural Language Generation, ... | 2021-02-02 | - |
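METEOR is reported sometimes on a 0–1 scale and sometimes as a percentage (e.g. 0.4074 vs 40.74), so merging leaderboard exports requires normalizing scores to one scale and dropping exact duplicates before ranking. A minimal cleanup sketch; the data representation and helper names here are illustrative, not the leaderboard's actual export format:

```python
def normalize_meteor(score: float) -> float:
    """Map a METEOR score to the 0-1 scale.

    Scores above 1 are assumed to be percentages (e.g. 40.74 -> 0.4074).
    """
    return score / 100.0 if score > 1.0 else score


def dedupe_and_rank(rows):
    """Drop rows duplicated across scales, then sort higher-METEOR-first.

    `rows` is a list of (model_name, raw_score) pairs; the first
    occurrence of each (model, normalized score) pair is kept.
    """
    seen = set()
    out = []
    for model, raw in rows:
        score = normalize_meteor(raw)
        key = (model, round(score, 6))
        if key not in seen:
            seen.add(key)
            out.append((model, score))
    out.sort(key=lambda r: r[1], reverse=True)
    return out


# Illustrative subset: the same model appears on both scales.
rows = [
    ("T5B Baseline", 40.74),
    ("T5B Baseline", 0.4074),
    ("Control Prefixes (T5-large)", 0.411),
    ("FactT5B", 0.4072),
]
print(dedupe_and_rank(rows))
```

Rounding before comparison avoids spurious float mismatches such as 40.74 / 100 not being bit-identical to 0.4074.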