Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems

Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, Songlin Hu

2023-06-07 · Masked Language Modeling · Conversational Response Selection · Retrieval · Language Modelling

Paper · PDF · Code (official)

Abstract

Dialogue response selection aims to choose an appropriate response from several candidates given the user and system utterance history. Most existing work focuses on post-training and fine-tuning tailored to cross-encoders; no post-training method has been tailored to dense encoders for dialogue response selection. We argue that when a current language model such as BERT is used as a dense encoder, it encodes the dialogue context and the response separately, making it difficult to align the two representations. We therefore propose Dial-MAE (Dialogue Contextual Masked Auto-Encoder), a straightforward yet effective post-training technique tailored to dense encoders for dialogue response selection. Dial-MAE uses an asymmetric encoder-decoder architecture to compress dialogue semantics into dense vectors, yielding better alignment between the representations of the dialogue context and the response. Our experiments show that Dial-MAE is highly effective, achieving state-of-the-art performance on two commonly used benchmarks.
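To make the dense-encoder setting concrete: the context and each candidate response are embedded *independently*, and the candidate whose vector best matches the context vector is selected. The sketch below is a toy illustration of this scoring scheme, not the Dial-MAE model; the `encode` function is a deterministic stand-in for a BERT-style dense encoder.

```python
# Toy dense-retrieval response selection: context and candidates are
# encoded separately into dense vectors; highest dot product wins.

def encode(text: str, dim: int = 8) -> list[float]:
    """Stand-in embedding: character counts folded into `dim` buckets,
    L2-normalised so the dot product equals cosine similarity."""
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

def select_response(context: str, candidates: list[str]) -> str:
    """Return the candidate whose embedding is closest to the context's."""
    ctx = encode(context)
    scores = [sum(c * r for c, r in zip(ctx, encode(cand)))
              for cand in candidates]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

Because the two sides never attend to each other (unlike a cross-encoder), the quality of the selection rests entirely on how well the two embedding spaces are aligned; that alignment gap is what the post-training step targets.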

Results

Task                               Dataset                        Metric  Value  Model
Conversational Response Selection  Ubuntu Dialogue (v1, Ranking)  R10@1   0.918  Dial-MAE
Conversational Response Selection  Ubuntu Dialogue (v1, Ranking)  R10@2   0.964  Dial-MAE
Conversational Response Selection  Ubuntu Dialogue (v1, Ranking)  R10@5   0.993  Dial-MAE
Conversational Response Selection  E-commerce                     R10@1   0.93   Dial-MAE
Conversational Response Selection  E-commerce                     R10@2   0.977  Dial-MAE
Conversational Response Selection  E-commerce                     R10@5   0.997  Dial-MAE
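For readers unfamiliar with the metric: R10@k is 1 for an example when the gold response is ranked within the top k of its 10 candidates, and the reported value is the mean over the test set. A minimal sketch (function name is illustrative, not from the paper):

```python
def recall_at_k(ranked_candidates: list[str], gold: str, k: int) -> float:
    """R_10@k for one example: 1.0 if the gold response appears in the
    top-k of the ranked candidate list, else 0.0."""
    return 1.0 if gold in ranked_candidates[:k] else 0.0
```

Averaging this indicator over all test examples gives the table's values, e.g. 0.918 means the model ranks the gold response first for 91.8% of Ubuntu v1 examples.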

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)