Structured Training for Neural Network Transition-Based Parsing

David Weiss, Chris Alberti, Michael Collins, Slav Petrov

2015-06-19 · IJCNLP 2015 · Dependency Parsing

Abstract

We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
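As a concrete illustration of the recipe in the abstract, here is a minimal, self-contained Python sketch: a frozen random projection stands in for the pretrained network's hidden layers, a toy action space stands in for the parser's transition system, and only the final layer is trained with the structured perceptron under beam-search decoding, using an early-update rule (Collins and Roark, 2004), a common choice for beam-trained perceptrons. All names (phi, beam_search, train) and the toy setup are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transition system: N_ACTIONS choices per step, N_STEPS steps per "sentence".
N_ACTIONS, N_STEPS, DIM = 3, 5, 16

# Stand-in for the fixed, pretrained network: a frozen random projection from a
# one-hot encoding of the action history (the "parser state") to a dense vector.
W_frozen = rng.normal(size=(DIM, N_ACTIONS * N_STEPS + 1))

def phi(history):
    """Dense representation of a state; held fixed during perceptron training."""
    x = np.zeros(N_ACTIONS * N_STEPS + 1)
    x[-1] = 1.0  # bias feature
    for t, a in enumerate(history):
        x[t * N_ACTIONS + a] = 1.0
    return np.tanh(W_frozen @ x)

def beam_search(weights, beam_size, gold=None):
    """Beam search over action sequences. If `gold` is given, stop the moment
    the gold prefix falls off the beam (early update)."""
    beam = [((), 0.0)]  # (action history, cumulative score)
    for t in range(N_STEPS):
        cands = []
        for hist, score in beam:
            scores = weights @ phi(hist)  # final-layer scores for all actions
            for a in range(N_ACTIONS):
                cands.append((hist + (a,), score + scores[a]))
        beam = sorted(cands, key=lambda c: -c[1])[:beam_size]
        if gold is not None and gold[: t + 1] not in [h for h, _ in beam]:
            return beam[0][0], gold[: t + 1]  # early-update point
    return beam[0][0], gold

def train(gold_seqs, beam_size=4, epochs=20):
    """Structured perceptron for the final layer over the frozen phi."""
    weights = np.zeros((N_ACTIONS, DIM))
    for _ in range(epochs):
        for gold in gold_seqs:
            pred, gold_prefix = beam_search(weights, beam_size, gold)
            if pred != gold_prefix:
                # Perceptron update decomposed over transitions:
                # reward gold actions, penalize predicted ones.
                for t in range(len(gold_prefix)):
                    weights[gold_prefix[t]] += phi(gold_prefix[:t])
                    weights[pred[t]] -= phi(pred[:t])
    return weights

# Usage: fit the final layer to a toy gold action sequence, then decode.
gold = tuple(int(a) for a in rng.integers(N_ACTIONS, size=N_STEPS))
weights = train([gold])
pred, _ = beam_search(weights, beam_size=4)
print(pred == gold)  # typically True after a few epochs in this toy setup
```

Note that only `weights` is updated; `phi` (the stand-in for the neural representation learned on the gold corpus plus automatically parsed sentences) stays frozen, mirroring the paper's two-stage setup.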

Results

Task                 Dataset         Metric   Value (%)   Model
Dependency Parsing   Penn Treebank   LAS      92.06       Weiss et al.
Dependency Parsing   Penn Treebank   POS      97.3        Weiss et al.
Dependency Parsing   Penn Treebank   UAS      94.01       Weiss et al.
