Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme
We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations. By leveraging multiple attention mechanisms, the transducer can be effectively trained without relying on a pre-trained aligner. Experiments conducted on three separate broad-coverage semantic parsing tasks -- AMR, SDP and UCCA -- demonstrate that our attention-based neural transducer improves the state of the art on both AMR and UCCA, and is competitive with the state of the art on SDP.
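The transduction loop the abstract describes can be pictured as a single decode step that chooses the next node from several sources and scores a relation against the nodes built so far. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: the class name `TransducerStep`, the three-way gate, and the use of `MultiheadAttention` and `Bilinear` modules are assumptions standing in for the paper's pointer-generator and deep biaffine components; edge-label prediction and the full decoder stack are omitted.

```python
# Minimal sketch (illustrative, not the authors' code) of one transducer decode step:
# attention over the source yields an alignment-free copy distribution, attention over
# previously generated nodes handles re-entrancies, and a biaffine scorer picks the
# earlier node that heads the next semantic relation. Dimensions are illustrative.
import torch
import torch.nn.functional as F


class TransducerStep(torch.nn.Module):
    def __init__(self, hidden=256, vocab=5000):
        super().__init__()
        self.src_attn = torch.nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.tgt_attn = torch.nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.gate = torch.nn.Linear(hidden, 3)          # generate vs. copy-source vs. copy-target
        self.generate = torch.nn.Linear(hidden, vocab)  # vocabulary (new concept) distribution
        self.edge_head = torch.nn.Linear(hidden, hidden)
        self.edge_dep = torch.nn.Linear(hidden, hidden)
        self.biaffine = torch.nn.Bilinear(hidden, hidden, 1)

    def forward(self, dec_state, src_states, prev_nodes):
        # dec_state: (B, 1, H); src_states: (B, S, H); prev_nodes: (B, T, H)
        ctx_src, src_weights = self.src_attn(dec_state, src_states, src_states)
        ctx_tgt, tgt_weights = self.tgt_attn(dec_state, prev_nodes, prev_nodes)
        mix = F.softmax(self.gate(dec_state + ctx_src + ctx_tgt), dim=-1)   # (B, 1, 3)
        vocab_dist = F.softmax(self.generate(dec_state + ctx_src), dim=-1)  # (B, 1, V)
        # The next node is a mixture of generating a new concept, copying a source
        # token, or pointing back to an already-generated node (a re-entrancy).
        node_dist = {
            "generate": mix[..., 0:1] * vocab_dist,
            "copy_source": mix[..., 1:2] * src_weights,
            "copy_target": mix[..., 2:3] * tgt_weights,
        }
        # Biaffine scoring of the new node against earlier nodes selects the head
        # of the next semantic relation (edge).
        head = self.edge_head(prev_nodes)                # (B, T, H)
        dep = self.edge_dep(dec_state).expand_as(head)   # (B, T, H)
        edge_scores = self.biaffine(head, dep).squeeze(-1)  # (B, T)
        return node_dist, edge_scores
```

Under these assumptions, each step emits the node with the highest mixture probability and attaches it to the highest-scoring earlier node, so the graph grows one semantic relation at a time without any pre-computed alignments.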
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| AMR Parsing | LDC2014T12 (AMR 1.0) | Smatch (F1 Full) | 71.3 | Neural transducer (Zhang et al., 2019) |
| AMR Parsing | LDC2017T10 (AMR 2.0) | Smatch | 77.0 | Neural transducer (Zhang et al., 2019) |
| UCCA Parsing | SemEval 2019 Task 1 | English-Wiki (open) F1 | 76.6 | Neural transducer (Zhang et al., 2019) |