Yuval Kirstain, Ori Ram, Omer Levy
The introduction of pretrained language models has reduced many complex, task-specific NLP models to simple, lightweight layers. An exception to this trend is coreference resolution, where a sophisticated task-specific model is appended to a pretrained transformer encoder. While highly effective, this model has a very large memory footprint -- primarily due to dynamically-constructed span and span-pair representations -- which hinders the processing of complete documents and the ability to train on multiple instances in a single batch. We introduce a lightweight end-to-end coreference model that removes the dependency on span representations, handcrafted features, and heuristics. Our model performs competitively with the current standard model, while being simpler and more efficient.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Coreference Resolution | CoNLL-2012 | Avg F1 | 80.3 | s2e + Longformer-Large |
| Coreference Resolution | CoNLL-2012 | Avg F1 | 80.2 | c2f + SpanBERT-Large |