Improved grammatical error correction by ranking elementary edits
Anonymous
Abstract
We offer a rescoring method for grammatical error correction based on a two-stage procedure: a first-stage model extracts local edits, and a second-stage model classifies them as correct or incorrect. We show how to use either an encoder-decoder or a sequence-labeling approach as the first stage of our model. We achieve state-of-the-art quality on the BEA 2019 English dataset even with a weak BERT-GEC base model. When using a state-of-the-art GECToR edit generator and the combined scorer, our model beats GECToR on BEA 2019 by $2-3\%$. Our model also beats the previous state of the art on Russian, despite using smaller models and less data than previous approaches.
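The two-stage procedure can be illustrated with a minimal sketch: a first-stage generator proposes local edits, and a second-stage classifier scores each edit, with only the accepted edits applied. All names here are illustrative, not the paper's actual API; the `propose` and `score` functions stand in for the edit generator and the trained edit classifier.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Edit:
    start: int               # first token index the edit covers
    end: int                 # one past the last covered token index
    replacement: List[str]   # tokens replacing source[start:end]

def correct(tokens: List[str],
            propose: Callable[[List[str]], List[Edit]],
            score: Callable[[List[str], Edit], float],
            threshold: float = 0.5) -> List[str]:
    """Keep only proposed edits whose classifier score passes the threshold."""
    kept = [e for e in propose(tokens) if score(tokens, e) >= threshold]
    # Apply edits right-to-left so earlier indices remain valid.
    out = list(tokens)
    for e in sorted(kept, key=lambda e: e.start, reverse=True):
        out[e.start:e.end] = e.replacement
    return out
```

For example, with a generator proposing two edits for "he go to school" and a classifier accepting only the verb fix, `correct` returns "he goes to school":

```python
tokens = "he go to school".split()
propose = lambda t: [Edit(1, 2, ["goes"]), Edit(3, 3, ["the"])]
score = lambda t, e: 0.9 if e.replacement == ["goes"] else 0.2
correct(tokens, propose, score)  # → ['he', 'goes', 'to', 'school']
```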