Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Uni-Encoder: A Fast and Accurate Response Selection Paradigm for Generation-Based Dialogue Systems

Chiyu Song, Hongliang He, Haofei Yu, Pengfei Fang, Leyang Cui, Zhenzhong Lan

2021-06-02 · Conversational Response Selection

Paper · PDF · Code (official)

Abstract

Sample-and-rank is a key decoding strategy for modern generation-based dialogue systems. It helps achieve diverse and high-quality responses by selecting an answer from a small pool of generated candidates. The current state-of-the-art ranking methods mainly use an encoding paradigm called Cross-Encoder, which separately encodes each context-candidate pair and ranks the candidates according to their fitness scores. However, Cross-Encoder repeatedly encodes the same lengthy context for each candidate, resulting in high computational costs. Poly-Encoder addresses this problem by reducing the interaction between context and candidates, but at the price of a performance drop. In this work, we develop a new paradigm called Uni-Encoder, which keeps the full attention over each pair as in Cross-Encoder while only encoding the context once, as in Poly-Encoder. Uni-Encoder encodes all the candidates with the context in one forward pass. We use the same positional embedding for all candidates to ensure they are treated equally and design a new attention mechanism to avoid confusion. Our Uni-Encoder can simulate other ranking paradigms using different attention and response concatenation methods. Extensive experiments show that our proposed paradigm achieves new state-of-the-art results on four benchmark datasets with high computational efficiency. For instance, it improves R10@1 by 2.9% with an approximately 4X faster inference speed on the Ubuntu V2 dataset.
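The core idea in the abstract — encoding the context once, scoring all candidates in a single forward pass, sharing positional embeddings across candidates, and masking attention so candidates do not see each other — can be sketched as follows. This is a minimal illustration of the described mechanism, not the authors' implementation; the function name and signature are our own, and the mask uses plain Python lists rather than a tensor library.

```python
def uni_encoder_inputs(ctx_len, cand_lens):
    """Sketch of Uni-Encoder-style inputs: shared position ids plus a
    block attention mask. Context tokens attend to the context; each
    candidate attends to the context and to itself, but never to other
    candidates, so one forward pass can score every candidate without
    cross-candidate confusion (hypothetical helper, for illustration).
    """
    total = ctx_len + sum(cand_lens)
    mask = [[False] * total for _ in range(total)]

    # Context block attends to itself.
    for i in range(ctx_len):
        for j in range(ctx_len):
            mask[i][j] = True

    # All candidates reuse the same position ids, so each is treated
    # as if it directly followed the context.
    pos = list(range(ctx_len))
    offset = ctx_len
    for length in cand_lens:
        for i in range(offset, offset + length):
            for j in range(ctx_len):                 # candidate -> context
                mask[i][j] = True
            for j in range(offset, offset + length):  # candidate -> itself
                mask[i][j] = True
        pos.extend(range(ctx_len, ctx_len + length))  # shared positions
        offset += length
    return pos, mask
```

Compared with a Cross-Encoder, which would run one forward pass per context-candidate pair, this layout packs every pair into a single sequence while the mask preserves pairwise full attention.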

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Conversational Response Selection | Douban | MAP | 0.648 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | MRR | 0.688 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | P@1 | 0.518 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | R10@1 | 0.327 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | R10@2 | 0.557 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | R10@5 | 0.865 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Douban | MAP | 0.622 | Uni-Encoder |
| Conversational Response Selection | Douban | MRR | 0.662 | Uni-Encoder |
| Conversational Response Selection | Douban | P@1 | 0.481 | Uni-Encoder |
| Conversational Response Selection | Douban | R10@1 | 0.303 | Uni-Encoder |
| Conversational Response Selection | Douban | R10@2 | 0.514 | Uni-Encoder |
| Conversational Response Selection | Douban | R10@5 | 0.852 | Uni-Encoder |
| Conversational Response Selection | Persona-Chat | MRR | 0.922 | Uni-Encoder |
| Conversational Response Selection | Persona-Chat | R20@1 | 0.869 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v2, Ranking) | R10@1 | 0.859 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v2, Ranking) | R10@2 | 0.938 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v2, Ranking) | R10@5 | 0.99 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@1 | 0.916 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@2 | 0.965 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@5 | 0.994 | Uni-Enc+BERT-FP |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@1 | 0.886 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@2 | 0.946 | Uni-Encoder |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@5 | 0.989 | Uni-Encoder |
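The Rn@k metrics in the table are the standard response-selection recall: given n candidates of which one is the gold response, the score is 1 if the gold response ranks within the model's top k, else 0, averaged over examples. A minimal sketch for a single example (the function name and the gold-at-index-0 convention are our own, for illustration):

```python
def recall_at_k(scores, k, gold_index=0):
    """Rn@k for one example: scores holds the model's fitness score for
    each of n candidates, with the gold response at gold_index.
    Returns 1 if the gold response is among the top-k ranked candidates,
    else 0. Dataset-level Rn@k averages this over all examples.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return int(gold_index in ranked[:k])
```

For example, with 10 candidates per context, R10@1 requires the gold response to be ranked first, so R10@1 is always at most R10@2, consistent with each row group above.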

Related Papers

Efficient Dynamic Hard Negative Sampling for Dialogue Selection (2024-08-16)
P5: Plug-and-Play Persona Prompting for Personalized Response Selection (2023-10-10)
Knowledge-aware response selection with semantics underlying multi-turn open-domain conversations (2023-07-27)
Dial-MAE: ConTextual Masked Auto-Encoder for Retrieval-based Dialogue Systems (2023-06-07)
Learning Dialogue Representations from Consecutive Utterances (2022-05-26)
One Agent To Rule Them All: Towards Multi-agent Conversational AI (2022-03-15)
Two-Level Supervised Contrastive Learning for Response Selection in Multi-Turn Dialogue (2022-03-01)
Small Changes Make Big Differences: Improving Multi-turn Response Selection in Dialogue Systems via Fine-Grained Contrastive Learning (2021-11-19)