Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

Chen Zhang, Luis Fernando D'Haro, Thomas Friedrichs, Haizhou Li

2021-12-14 · Dialogue Evaluation
Paper · PDF · Code (official)

Abstract

Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations. To measure the quality of such conversational agents, a dialogue evaluator is expected to conduct assessments across domains as well. However, most state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation. We are motivated to design a general and robust framework, MDD-Eval, to address this problem. Specifically, we first train a teacher evaluator on human-annotated data to acquire the rating skill of telling good dialogue responses from bad ones in a particular domain, and then adopt a self-training strategy to train a new evaluator on teacher-annotated multi-domain data, which helps the new evaluator generalize across multiple domains. MDD-Eval is extensively assessed on six dialogue evaluation benchmarks. Empirical results show that the MDD-Eval framework achieves strong performance, with an absolute improvement of 7% over state-of-the-art ADMs in terms of mean Spearman correlation across all evaluation benchmarks.
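
The abstract describes a two-stage teacher-student setup: a teacher evaluator learned from human-annotated data pseudo-labels augmented multi-domain dialogues, and a student evaluator (the final MDD-Eval metric) is trained on those pseudo-labels. Below is a minimal sketch of that self-training loop; the `TeacherEvaluator`/`StudentEvaluator` classes, data fields, and confidence threshold are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the teacher-student self-training idea described in the
# abstract. All class names, data fields, and the threshold are assumptions.

def train_mdd_eval(human_annotated, multi_domain_unlabeled,
                   TeacherEvaluator, StudentEvaluator,
                   confidence_threshold=0.9):
    # Stage 1: train a teacher evaluator on human-annotated
    # (context, response, quality-label) pairs from a single domain.
    teacher = TeacherEvaluator()
    teacher.fit(
        [(ex["context"], ex["response"]) for ex in human_annotated],
        [ex["label"] for ex in human_annotated],
    )

    # Stage 2: let the teacher pseudo-label augmented multi-domain dialogues,
    # keeping only confident predictions.
    pseudo_labeled = []
    for ex in multi_domain_unlabeled:
        prob = teacher.score(ex["context"], ex["response"])  # P(response is good)
        if prob >= confidence_threshold or prob <= 1 - confidence_threshold:
            pseudo_labeled.append((ex["context"], ex["response"], int(prob >= 0.5)))

    # Stage 3: train the student (the final multi-domain metric) on the
    # teacher-annotated data so it generalizes across domains.
    student = StudentEvaluator()
    student.fit(
        [(c, r) for c, r, _ in pseudo_labeled],
        [y for _, _, y in pseudo_labeled],
    )
    return student
```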

Results

Task                  Dataset          Metric                 Value    Model
Open-Domain Dialog    USR-TopicalChat  Pearson Correlation    0.4575   MDD-Eval
Open-Domain Dialog    USR-TopicalChat  Spearman Correlation   0.5109   MDD-Eval
Dialogue Evaluation   USR-TopicalChat  Pearson Correlation    0.4575   MDD-Eval
Dialogue Evaluation   USR-TopicalChat  Spearman Correlation   0.5109   MDD-Eval
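
For context on how numbers like these are produced: automatic dialogue metrics are scored by correlating their per-response scores with human quality ratings on each benchmark, and the table reports Pearson and Spearman correlations. A minimal sketch using scipy is below; the scores and ratings are made-up illustrative values, not data from USR-TopicalChat.

```python
# Correlate a metric's per-response scores with human ratings.
# The numbers below are invented for illustration only.
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.91, 0.42, 0.77, 0.15, 0.63]   # hypothetical evaluator scores
human_ratings = [4.5, 2.0, 4.0, 1.5, 3.0]        # hypothetical human judgments

pearson, _ = pearsonr(metric_scores, human_ratings)
spearman, _ = spearmanr(metric_scores, human_ratings)
print(f"Pearson:  {pearson:.4f}")
print(f"Spearman: {spearman:.4f}")
```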

Related Papers

DRE: An Effective Dual-Refined Method for Integrating Small and Large Language Models in Open-Domain Dialogue Evaluation (2025-06-04)
MEDAL: A Framework for Benchmarking LLMs as Multilingual Open-Domain Chatbots and Dialogue Evaluators (2025-05-28)
MARS-Bench: A Multi-turn Athletic Real-world Scenario Benchmark for Dialogue Evaluation (2025-05-27)
LeCoDe: A Benchmark Dataset for Interactive Legal Consultation Dialogue Evaluation (2025-05-26)
Methods for Recognizing Nested Terms (2025-04-22)
RuOpinionNE-2024: Extraction of Opinion Tuples from Russian News Texts (2025-04-09)
Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models (2025-04-07)
BoK: Introducing Bag-of-Keywords Loss for Interpretable Dialogue Response Generation (2025-01-17)