Structured Training for Neural Network Transition-Based Parsing
David Weiss, Chris Alberti, Michael Collins, Slav Petrov
Abstract
We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
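The training recipe described above (a frozen neural representation with a final layer trained as a structured perceptron under beam-search decoding) can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the transition inventory, `featurize`, and the toy update are all stand-ins, legality checks are omitted, and the full-sequence update simplifies the early-update strategy typically used with beam-search perceptron training.

```python
import numpy as np

TRANSITIONS = [0, 1, 2]  # stand-ins for SHIFT, LEFT-ARC, RIGHT-ARC
DIM = 16                 # size of the frozen hidden representation

def featurize(sent, hist):
    # Stand-in for the fixed network activations at the parser state
    # reached by transition history `hist` (deterministic within a run).
    seed = abs(hash((tuple(sent), tuple(hist)))) % (2**32)
    return np.random.default_rng(seed).standard_normal(DIM)

def beam_search(sent, W, beam_size=8, n_steps=None):
    # Keep the `beam_size` best transition sequences; each step scores
    # a candidate as W[t] . phi(state). Legality checks omitted.
    if n_steps is None:
        n_steps = 2 * len(sent)  # arc-standard parses use 2n transitions
    beam = [((), 0.0)]
    for _ in range(n_steps):
        expanded = [
            (hist + (t,), score + float(W[t] @ featurize(sent, hist)))
            for hist, score in beam
            for t in TRANSITIONS
        ]
        beam = sorted(expanded, key=lambda x: -x[1])[:beam_size]
    return beam[0][0]

def perceptron_update(W, sent, gold):
    # One structured-perceptron step: decode with the beam and, on a
    # mistake, add features of gold states and subtract predicted ones.
    pred = beam_search(sent, W, n_steps=len(gold))
    for i, (g, p) in enumerate(zip(gold, pred)):
        if gold[:i + 1] == pred[:i + 1]:
            continue  # the shared prefix cancels in the update
        W[g] += featurize(sent, gold[:i])
        W[p] -= featurize(sent, pred[:i])
    return W

# Toy usage on a fake two-word sentence and a gold transition sequence.
W = {t: np.zeros(DIM) for t in TRANSITIONS}
W = perceptron_update(W, ("a", "b"), (0, 0, 2, 2))
```

Because the representation is fixed, each update is linear in the activations, which is what makes the global perceptron step tractable on top of the network.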
Results
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Dependency Parsing | Penn Treebank | LAS | 92.06 | Weiss et al. |
| Dependency Parsing | Penn Treebank | POS | 97.3 | Weiss et al. |
| Dependency Parsing | Penn Treebank | UAS | 94.01 | Weiss et al. |
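The metrics above are standard: UAS is the percentage of tokens assigned the correct head, LAS additionally requires the correct dependency label, and POS is part-of-speech tagging accuracy. A minimal sketch of UAS/LAS computation, using a made-up example (real PTB evaluation also conventionally excludes punctuation):

```python
def attachment_scores(gold, pred):
    # UAS: fraction of tokens whose predicted head matches gold.
    # LAS: both the head and the dependency label must match.
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return 100.0 * uas, 100.0 * las

# Hypothetical example: each token is a (head index, label) pair.
gold = [(2, "nsubj"), (0, "root"), (2, "dobj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]
print(attachment_scores(gold, pred))  # all heads right, one label wrong:
                                      # UAS 100.0, LAS 66.7
```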
Related Papers
- Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs (2025-06-11)
- UD-KSL Treebank v1.3: A semi-automated framework for aligning XPOS-extracted units with UPOS tags (2025-06-10)
- LKD-KGC: Domain-Specific KG Construction via LLM-driven Knowledge Dependency Parsing (2025-05-30)
- Dependency Parsing is More Parameter-Efficient with Normalization (2025-05-26)
- FiLLM -- A Filipino-optimized Large Language Model based on Southeast Asia Large Language Model (SEALLM) (2025-05-25)
- CrosGrpsABS: Cross-Attention over Syntactic and Semantic Graphs for Aspect-Based Sentiment Analysis in a Low-Resource Language (2025-05-25)
- Semantic-based Unsupervised Framing Analysis (SUFA): A Novel Approach for Computational Framing Analysis (2025-05-21)
- Hierarchical Bracketing Encodings for Dependency Parsing as Tagging (2025-05-16)