Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker-change information, an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to handle entangled dialogues: it selects a small number of the most important utterances as the filtered context, based on the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language model. Experiments on five public datasets show that our proposed model outperforms previous models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.
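The speaker-change signal described above can be sketched as an extra speaker embedding summed into BERT's usual token, position, and segment embeddings. This is a minimal illustrative sketch, not the authors' code: all sizes, array names, and the `embed` helper are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; not taken from the paper.
vocab_size, max_len, hidden = 100, 32, 16
num_speakers = 2  # speaker A / speaker B in a two-party dialogue

tok_emb = rng.normal(size=(vocab_size, hidden))
pos_emb = rng.normal(size=(max_len, hidden))
seg_emb = rng.normal(size=(2, hidden))             # context vs. response segment
spk_emb = rng.normal(size=(num_speakers, hidden))  # the speaker-aware addition

def embed(token_ids, segment_ids, speaker_ids):
    """BERT-style input embedding with an extra speaker embedding summed in."""
    positions = np.arange(len(token_ids))
    return (tok_emb[token_ids]
            + pos_emb[positions]
            + seg_emb[segment_ids]
            + spk_emb[speaker_ids])

# Two context utterances from different speakers, then a response candidate.
token_ids   = np.array([7, 8, 9, 10, 11, 12])
segment_ids = np.array([0, 0, 0, 0, 1, 1])  # last two tokens form the response
speaker_ids = np.array([0, 0, 1, 1, 0, 0])  # speaker changes mid-context

x = embed(token_ids, segment_ids, speaker_ids)
print(x.shape)  # (6, 16)
```

The key point is that tokens on either side of a speaker change receive different speaker embeddings, so the encoder can distinguish who said what without any architectural change to BERT itself.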
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Conversational Response Selection | Ubuntu IRC | Accuracy | 60.42 | SA-BERT |
| Conversational Response Selection | Douban | MAP | 0.619 | SA-BERT |
| Conversational Response Selection | Douban | MRR | 0.659 | SA-BERT |
| Conversational Response Selection | Douban | P@1 | 0.496 | SA-BERT |
| Conversational Response Selection | Douban | R10@1 | 0.313 | SA-BERT |
| Conversational Response Selection | Douban | R10@2 | 0.481 | SA-BERT |
| Conversational Response Selection | Douban | R10@5 | 0.847 | SA-BERT |
| Conversational Response Selection | RRS | MAP | 0.701 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS | MRR | 0.715 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS | P@1 | 0.555 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS | R10@1 | 0.497 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS | R10@2 | 0.685 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS | R10@5 | 0.931 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS Ranking Test | NDCG@3 | 0.674 | SA-BERT+BERT-FP |
| Conversational Response Selection | RRS Ranking Test | NDCG@5 | 0.753 | SA-BERT+BERT-FP |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@1 | 0.855 | SA-BERT |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@2 | 0.928 | SA-BERT |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R10@5 | 0.983 | SA-BERT |
| Conversational Response Selection | Ubuntu Dialogue (v1, Ranking) | R2@1 | 0.965 | SA-BERT |
| Conversational Response Selection | E-commerce | R10@1 | 0.704 | SA-BERT |
| Conversational Response Selection | E-commerce | R10@2 | 0.879 | SA-BERT |
| Conversational Response Selection | E-commerce | R10@5 | 0.985 | SA-BERT |
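The R_n@k and MRR figures in the table follow the standard definitions for response selection: given n ranked candidates, R_n@k is the fraction of examples whose correct response appears in the top k, and MRR averages the reciprocal rank of the first correct response. A minimal sketch of these metrics (function names are my own, not from the paper):

```python
def recall_at_k(ranked_labels, k):
    """R_n@k for one example: 1.0 if any correct response is ranked in the top k.

    ranked_labels: relevance labels (1 = correct) sorted by the model's score,
    best first; n is implicitly len(ranked_labels).
    """
    return 1.0 if any(ranked_labels[:k]) else 0.0

def reciprocal_rank(ranked_labels):
    """1 / rank of the first correct response, or 0.0 if none is correct."""
    for rank, rel in enumerate(ranked_labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

# One example with 10 candidates; the correct response is ranked 2nd.
labels = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(recall_at_k(labels, 1))   # 0.0  -> contributes nothing to R10@1
print(recall_at_k(labels, 2))   # 1.0  -> counted for R10@2 (and R10@5)
print(reciprocal_rank(labels))  # 0.5
```

Corpus-level scores such as those in the table are simply these per-example values averaged over the test set; P@1 coincides with R_n@1 when exactly one candidate is correct.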