Abstract
We present a memory-based model for context-dependent semantic parsing. Previous approaches focus on enabling the decoder to copy or modify the parse from the previous utterance, assuming a dependency between the current and previous parses. In this work, we propose to represent contextual information using an external memory. We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances. We evaluate our approach on three semantic parsing benchmarks. Experimental results show that our model processes context-dependent information more effectively and improves performance without relying on task-specific decoders.
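The abstract does not specify the controller's update rule, so the sketch below is only a minimal illustration of the general idea: an external memory whose slots are read by attention against the current utterance and updated with a gated write, so the memory accumulates the meaning of the dialogue turn by turn. The class and method names (`ContextMemoryController`, `read`, `write`, `reset`) and the gated-update formulation are our own assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of an external context memory
# for context-dependent semantic parsing. A learned controller reads the
# memory with attention and folds each new utterance in via a gated write,
# maintaining a cumulative representation of the interaction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextMemoryController(nn.Module):
    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.dim = dim
        # Learned initial memory; one row per slot.
        self.init_memory = nn.Parameter(torch.zeros(num_slots, dim))
        # Gate deciding how much of each slot to overwrite per turn.
        self.write_gate = nn.Linear(2 * dim, dim)
        # Candidate content written into the gated slots.
        self.write_content = nn.Linear(2 * dim, dim)

    def reset(self, batch_size: int) -> torch.Tensor:
        """Fresh memory at the start of an interaction."""
        return self.init_memory.unsqueeze(0).expand(batch_size, -1, -1)

    def read(self, memory: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        """Attention read: summarize memory w.r.t. the current utterance."""
        # memory: (B, S, D), query: (B, D)
        scores = torch.einsum("bsd,bd->bs", memory, query) / self.dim ** 0.5
        attn = F.softmax(scores, dim=-1)
        return torch.einsum("bs,bsd->bd", attn, memory)

    def write(self, memory: torch.Tensor, utterance: torch.Tensor) -> torch.Tensor:
        """Gated update: fold the new utterance into the cumulative context."""
        utt = utterance.unsqueeze(1).expand_as(memory)  # (B, S, D)
        inp = torch.cat([memory, utt], dim=-1)          # (B, S, 2D)
        gate = torch.sigmoid(self.write_gate(inp))      # (B, S, D)
        cand = torch.tanh(self.write_content(inp))      # (B, S, D)
        return (1 - gate) * memory + gate * cand


# Usage: one read/write cycle per user turn.
if __name__ == "__main__":
    ctrl = ContextMemoryController(num_slots=8, dim=64)
    mem = ctrl.reset(batch_size=2)
    for turn in range(3):
        utt_enc = torch.randn(2, 64)       # stand-in for an utterance encoder
        context = ctrl.read(mem, utt_enc)  # context vector fed to the decoder
        mem = ctrl.write(mem, utt_enc)     # cumulative meaning so far
    print(context.shape, mem.shape)        # (2, 64) and (2, 8, 64)
```

The gated write is one plausible way to realize "maintaining the cumulative meaning of sequential user utterances": the gate lets the controller preserve slots that still matter while overwriting ones the new turn supersedes.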
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Parsing | SParC | Exact Match | 40.3 | MeMCE |
Related Papers
- Where, What, Why: Towards Explainable Driver Attention Prediction (2025-06-29)
- Beyond Chains: Bridging Large Language Models and Knowledge Bases in Complex Question Answering (2025-05-20)
- Creativity or Brute Force? Using Brainteasers as a Window into the Problem-Solving Abilities of Large Language Models (2025-05-16)
- Sigma: A dataset for text-to-code semantic parsing with statistical analysis (2025-04-05)
- Diverse In-Context Example Selection After Decomposing Programs and Aligned Utterances Improves Semantic Parsing (2025-04-04)
- ZOGRASCOPE: A New Benchmark for Property Graphs (2025-03-07)
- Geo-Semantic-Parsing: AI-powered geoparsing by traversing semantic knowledge graphs (2025-03-03)
- Disambiguate First Parse Later: Generating Interpretations for Ambiguity Resolution in Semantic Parsing (2025-02-25)