Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Scalable Neural Dialogue State Tracking

Vevake Balaraman, Bernardo Magnini

2019-10-22 · Dialogue State Tracking
Paper · PDF · Code (official)

Abstract

A Dialogue State Tracker (DST) is a key component of a dialogue system that estimates the user's possible goals at each dialogue turn. Most current DST models use recurrent neural networks and are based on complex architectures that manage several aspects of a dialogue, including the user utterance, the system actions, and the slot-value pairs defined in a domain ontology. However, the complexity of such neural architectures incurs considerable latency in dialogue state prediction, which limits the deployment of these models in real-world applications, particularly when task scalability (i.e., the number of slots) is a crucial factor. In this paper, we propose an innovative neural model for dialogue state tracking, named Global encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue state with very low latency while maintaining high performance. We report experiments on three languages (English, Italian, and German) of the WoZ2.0 dataset, and show that the proposed approach provides competitive advantages over state-of-the-art DST systems, both in accuracy and in prediction time, being over 15 times faster than the other systems.
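The abstract's core idea — one shared ("global") utterance encoder whose output is read by a lightweight attentive decoder per slot — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the random weights, the dot-product attention, and the candidate-value scoring are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes for the sketch (not taken from the paper).
T, d, n_slots, n_values = 6, 8, 3, 4

# Global encoder output: one hidden vector per token of the user
# utterance, computed once and shared by every slot decoder.
H = rng.normal(size=(T, d))                          # (T, d)

# One attentive decoder per slot: a slot-specific query attends over
# H, and the pooled summary is scored against that slot's candidate
# values (in WoZ-style DST these come from the domain ontology).
queries = rng.normal(size=(n_slots, d))              # slot attention queries
value_emb = rng.normal(size=(n_slots, n_values, d))  # candidate-value embeddings

predictions = []
for s in range(n_slots):
    attn = softmax(H @ queries[s])                   # (T,) weights over tokens
    summary = attn @ H                               # (d,) slot-specific summary
    scores = value_emb[s] @ summary                  # (n_values,) value scores
    predictions.append(int(scores.argmax()))         # predicted value per slot

print(predictions)
```

Because the expensive encoding pass is shared and each slot adds only an attention pooling plus a value scoring, the per-slot cost stays small — which is the scalability argument the abstract makes for low-latency prediction as the number of slots grows.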

Results

Task                     Dataset       Metric    Value  Model
Dialogue State Tracking  Wizard-of-Oz  Joint     88.7   G-SAT
Dialogue State Tracking  Wizard-of-Oz  Request   96.9   G-SAT

Related Papers

- Beyond Single-User Dialogue: Assessing Multi-User Dialogue State Tracking Capabilities of Large Language Models (2025-06-12)
- Factors affecting the in-context learning abilities of LLMs for dialogue state tracking (2025-06-10)
- Approaching Dialogue State Tracking via Aligning Speech Encoders and LLMs (2025-06-10)
- Interpretable and Robust Dialogue State Tracking via Natural Language Summarization with LLMs (2025-03-11)
- Learning LLM Preference over Intra-Dialogue Pairs: A Framework for Utterance-level Understandings (2025-03-07)
- Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling (2025-03-04)
- Know Your Mistakes: Towards Preventing Overreliance on Task-Oriented Conversational AI Through Accountability Modeling (2025-01-17)
- Intent-driven In-context Learning for Few-shot Dialogue State Tracking (2024-12-04)