Metric: F0.5, the F-score weighting precision twice as heavily as recall (higher is better)
| # | Model | F0.5 | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | Llama + 1M BT + gold | 74.09 | Yes | To Err Is Human, but Llamas Can Learn It Too | 2024-03-08 | Code |
| 2 | mBART-based model with synthetic data | 68.17 | Yes | - | - | Code |
| 3 | mT5 large + 10M synth | 68.09 | No | - | - | Code |
| 4 | RedPenNet | 67.71 | No | RedPenNet for Grammatical Error Correction: Outp... | 2023-09-19 | Code |
| 5 | ChatGPT (zero-shot) | 27.4 | No | GPT-3.5 for Grammatical Error Correction | 2024-05-14 | - |
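The F0.5 scores above come from the general F-beta formula with beta = 0.5, which rewards precise corrections over high-recall ones. A minimal sketch (the function name and the precision/recall values are illustrative, not taken from the leaderboard):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Weighted harmonic mean of precision and recall.

    beta < 1 favors precision; beta = 0.5 is the standard GEC metric.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical system with precision 0.8 and recall 0.6:
print(round(f_beta(0.8, 0.6), 4))
```

Note that with these example values F0.5 lands closer to the precision figure than to recall, which is why precision-heavy systems dominate F0.5 leaderboards.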