Unsupervised Dependency Parsing: Let's Use Supervised Parsers
Phong Le, Willem Zuidema
Abstract
We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. Our approach, called `iterated reranking' (IR), starts with dependency trees generated by an unsupervised parser, and iteratively improves them using the richer probability models of supervised parsers, which are in turn trained on these trees. Our system achieves an accuracy 1.8% higher than the state-of-the-art parser of Spitkovsky et al. (2013) on the WSJ corpus.
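The iterated-reranking loop described in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the authors' implementation: `candidate_trees` stands in for a k-best list from a real parser, and the "supervised model" is reduced to head-dependent word-pair counts; all function names are assumptions for illustration.

```python
# Toy sketch of iterated reranking (IR). Assumptions: the initial trees come
# from an unsupervised parser; the "supervised" model here is just counts of
# (head word, dependent word) pairs; candidate enumeration replaces a k-best list.
from collections import Counter
from itertools import product

def candidate_trees(sentence):
    """Enumerate all head assignments for a tiny sentence (toy k-best list).
    A tree maps dependent index (1-based) -> head index (0 = artificial ROOT)."""
    n = len(sentence)
    for assignment in product(range(n + 1), repeat=n):
        # a word may not head itself
        if all(h != i + 1 for i, h in enumerate(assignment)):
            yield dict(enumerate(assignment, start=1))

def train(trees, sentences):
    """Train the richer model on the current treebank (here: bilexical counts)."""
    model = Counter()
    for sent, tree in zip(sentences, trees):
        words = ["ROOT"] + sent
        for dep, head in tree.items():
            model[(words[head], sent[dep - 1])] += 1
    return model

def score(model, sentence, tree):
    words = ["ROOT"] + sentence
    return sum(model[(words[h], sentence[d - 1])] for d, h in tree.items())

def iterated_reranking(sentences, initial_trees, iterations=3):
    """Alternate training on the current trees and reranking candidates."""
    trees = initial_trees
    for _ in range(iterations):
        model = train(trees, sentences)
        trees = [max(candidate_trees(s), key=lambda t: score(model, s, t))
                 for s in sentences]
    return trees
```

In the paper the initial trees come from an unsupervised parser and the reranker is a full supervised probability model; here both are stubbed so the control flow of the self-training loop is visible.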
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Dependency Parsing | Penn Treebank | UAS | 66.2 | Iterated reranking |