Improving sentence compression by learning to predict gaze
Sigrid Klerke, Yoav Goldberg, Anders Søgaard
Abstract
We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.
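The abstract describes a multi-task setup in which gaze prediction serves as an auxiliary objective alongside sentence compression. A minimal sketch of such an architecture is below, assuming PyTorch; the layer sizes, the choice of a per-token gaze regression head, and the placement of the auxiliary head on the lower shared layer are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiTaskGazeLSTM(nn.Module):
    """Sketch: shared lower bi-LSTM supervised by an auxiliary gaze task,
    with an upper bi-LSTM and head for the main compression task.
    Dimensions and head design are illustrative assumptions."""

    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Lower layer is shared: its states feed both tasks, so gaze
        # supervision shapes the representations used for compression.
        self.shared = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Upper layer is specific to the main compression task.
        self.upper = nn.LSTM(2 * hidden, hidden, batch_first=True,
                             bidirectional=True)
        # Auxiliary head: per-token gaze measure (e.g. fixation duration).
        self.gaze_head = nn.Linear(2 * hidden, 1)
        # Main head: per-token keep/delete decision for compression.
        self.comp_head = nn.Linear(2 * hidden, 2)

    def forward(self, tokens):
        x = self.embed(tokens)                  # (batch, seq, emb_dim)
        h_shared, _ = self.shared(x)            # (batch, seq, 2*hidden)
        gaze = self.gaze_head(h_shared).squeeze(-1)   # (batch, seq)
        h_upper, _ = self.upper(h_shared)
        comp = self.comp_head(h_upper)          # (batch, seq, 2)
        return gaze, comp

model = MultiTaskGazeLSTM(vocab_size=1000)
toks = torch.randint(0, 1000, (2, 7))   # batch of 2 sentences, 7 tokens
gaze, comp = model(toks)
```

During training, the gaze loss (computed only on eye-tracking batches) and the compression loss would be optimized jointly, so the eye-movement signal regularizes the shared layer without requiring gaze annotations on the compression data.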
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Sentence Compression | Google Dataset | F1 | 0.81 | LSTMs + eye-movement |