
A Unified Pre-training Framework for Conversational AI

Siqi Bao, Bingjin Chen, Huang He, Xin Tian, Han Zhou, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Yingzhan Lin

2021-05-06 · Chatbot · Response Generation

Paper · PDF · Code (official)

Abstract

In this work, we explore the application of PLATO-2 to various dialogue systems, including open-domain conversation, knowledge grounded dialogue, and task-oriented conversation. PLATO-2 was initially designed as an open-domain chatbot, trained via two-stage curriculum learning. In the first stage, a coarse-grained response generation model is learned to fit the simplified one-to-one mapping relationship. This model is applied to task-oriented conversation, given that semantic mappings tend to be deterministic in task completion. In the second stage, a fine-grained generation model and an evaluation model are further learned for diverse response generation and coherence estimation, respectively. With their superior capability of capturing the one-to-many mapping, these models are suitable for open-domain conversation and knowledge grounded dialogue. For a comprehensive evaluation of PLATO-2, we participated in multiple tasks of DSTC9, including interactive evaluation of open-domain conversation (Track 3 - Task 2), static evaluation of knowledge grounded dialogue (Track 3 - Task 1), and end-to-end task-oriented conversation (Track 2 - Task 1). PLATO-2 obtained first place in all three tasks, verifying its effectiveness as a unified framework for various dialogue systems.
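At inference time, the second stage described in the abstract reduces to a generate-and-rank loop: the fine-grained generation model, conditioned on a discrete latent variable, proposes diverse candidate responses (the one-to-many mapping), and the separate evaluation model scores each candidate's coherence with the context. The sketch below illustrates that loop; it is not the authors' code, and the class names, the number of latent values, and the stub method bodies are all illustrative placeholders.

```python
# Minimal sketch of a generate-and-rank inference loop in the style of
# PLATO-2's second stage. All names and values here are hypothetical
# placeholders standing in for the real trained models.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    coherence: float


class FineGrainedGenerator:
    """Placeholder for the latent-conditioned generation model."""

    def __init__(self, num_latent: int = 20):
        # Each value of the discrete latent variable z is intended to
        # correspond to a distinct response "intent" (count illustrative).
        self.num_latent = num_latent

    def generate(self, context: str, z: int) -> str:
        # Real model: transformer decoding conditioned on (context, z).
        return f"candidate response for z={z}"  # stub output


class CoherenceEvaluator:
    """Placeholder for the evaluation (coherence-ranking) model."""

    def score(self, context: str, response: str) -> float:
        # Real model: estimates coherence of the (context, response) pair.
        return (len(response) % 7) / 7.0  # stub score


def respond(context: str,
            generator: FineGrainedGenerator,
            evaluator: CoherenceEvaluator) -> Candidate:
    # Enumerate latent values to obtain diverse candidates, then return
    # the candidate the evaluator rates as most coherent.
    candidates = [
        Candidate(text=generator.generate(context, z), coherence=0.0)
        for z in range(generator.num_latent)
    ]
    for c in candidates:
        c.coherence = evaluator.score(context, c.text)
    return max(candidates, key=lambda c: c.coherence)


if __name__ == "__main__":
    best = respond("Hi, any plans for the weekend?",
                   FineGrainedGenerator(), CoherenceEvaluator())
    print(best.text, best.coherence)
```

The first-stage coarse-grained model skips this loop entirely: it fits a one-to-one mapping from context to response, which is why, per the abstract, it transfers directly to task-oriented conversation, where the mapping is largely deterministic.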

Results

Task      Dataset                  Metric                 Value    Model
Dialogue  DSTC9 Track 3 - Task 2   Coherent               2.8017   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Consistent             0.939    PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Diversity              2.7441   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Error Recovery         2.7518   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Flexible               2.8      PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Informative            2.7881   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Inquisitive            2.7949   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Likeable               2.7878   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Overall Human Rating   4.15     PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Topic Depth            2.7678   PLATO-2
Dialogue  DSTC9 Track 3 - Task 2   Understanding          2.8285   PLATO-2

Related Papers

TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data (2025-07-08)
Disambiguation-Centric Finetuning Makes Enterprise Tool-Calling LLMs More Realistic and Less Risky (2025-07-04)
Generalized Adaptive Transfer Network: Enhancing Transfer Learning in Reinforcement Learning Across Domains (2025-07-02)
Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems (2025-06-28)
Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots (2025-06-25)
SAFEx: Analyzing Vulnerabilities of MoE-Based LLMs via Stable Safety-critical Expert Identification (2025-06-20)
Mapping Caregiver Needs to AI Chatbot Design: Strengths and Gaps in Mental Health Support for Alzheimer's and Dementia Caregivers (2025-06-18)
From What to Respond to When to Respond: Timely Response Generation for Open-domain Dialogue Agents (2025-06-17)