Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An Effective Domain Adaptive Post-Training Method for BERT in Response Selection

Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, Heuiseok Lim

Published: 2019-08-13 · Tasks: Conversational Response Selection, Retrieval, Language Modelling

Abstract

We focus on multi-turn response selection in a retrieval-based dialog system. In this paper, we utilize the powerful pre-trained language model Bidirectional Encoder Representations from Transformers (BERT) for a multi-turn dialog system and propose a highly effective post-training method on a domain-specific corpus. Although BERT is easily adapted to various NLP tasks and outperforms previous baselines on each task, it still has limitations if the task corpus is too focused on a certain domain. Post-training on a domain-specific corpus (e.g., the Ubuntu Corpus) helps the model learn contextualized representations of words that do not appear in a general corpus (e.g., English Wikipedia). Experimental results show that our approach achieves new state-of-the-art performance on two response selection benchmarks (i.e., Ubuntu Corpus V1 and the Advising Corpus), with improvements of 5.9% and 6% in R@1.
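As described in the abstract, the method continues pre-training-style learning on the target-domain corpus before fine-tuning for response selection. Below is a minimal sketch of the masked-language-modeling part of such post-training, using the Hugging Face transformers and datasets libraries; the corpus file name and all hyperparameters are illustrative assumptions, not the authors' exact setup, and BERT's next-sentence-style objective is omitted for brevity.

# Minimal sketch: domain-adaptive post-training of BERT via masked language
# modeling (MLM) on a domain corpus, before fine-tuning for response selection.
# File path and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# "ubuntu_corpus.txt" is a hypothetical file with one domain utterance per line.
dataset = load_dataset("text", data_files={"train": "ubuntu_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, as in standard BERT pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-post-trained",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    learning_rate=3e-5,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized,
).train()
model.save_pretrained("bert-post-trained")

The post-trained checkpoint would then be loaded in place of vanilla BERT when fine-tuning a context-response matching classifier.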

Results

All rows are for the task Conversational Response Selection.

Dataset                        | Metric | Value | Model
Douban                         | MAP    | 0.591 | BERT
Douban                         | MRR    | 0.633 | BERT
Douban                         | P@1    | 0.454 | BERT
Douban                         | R10@1  | 0.280 | BERT
Douban                         | R10@2  | 0.470 | BERT
Douban                         | R10@5  | 0.828 | BERT
RRS                            | MAP    | 0.625 | BERT
RRS                            | MRR    | 0.639 | BERT
RRS                            | P@1    | 0.453 | BERT
RRS                            | R10@1  | 0.404 | BERT
RRS                            | R10@2  | 0.606 | BERT
RRS                            | R10@5  | 0.875 | BERT
RRS Ranking Test               | NDCG@3 | 0.625 | BERT
RRS Ranking Test               | NDCG@5 | 0.714 | BERT
Ubuntu Dialogue (v1, Ranking)  | R10@1  | 0.855 | BERT-VFT
Ubuntu Dialogue (v1, Ranking)  | R10@2  | 0.928 | BERT-VFT
Ubuntu Dialogue (v1, Ranking)  | R10@5  | 0.985 | BERT-VFT
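The Rn@k values above are standard retrieval recalls: for each dialog context the model scores n candidate responses, and Rn@k is the fraction of contexts whose ground-truth response ranks among the top k. A minimal sketch for the single-ground-truth case (function name and scores are illustrative):

# Rn@k for one example: 1.0 if the true response (at label_index) is among
# the k highest-scoring of the n candidates, else 0.0. Averaging this over
# a test set yields metrics such as R10@1 in the table above.
def recall_at_k(scores, label_index, k):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return 1.0 if label_index in ranked[:k] else 0.0

# Example: 10 candidate scores, ground-truth response at index 3.
scores = [0.12, 0.05, 0.40, 0.91, 0.33, 0.08, 0.27, 0.61, 0.02, 0.19]
print(recall_at_k(scores, label_index=3, k=1))  # 1.0 -> counts toward R10@1
print(recall_at_k(scores, label_index=3, k=5))  # 1.0 -> counts toward R10@5

Note that some benchmarks (e.g., Douban) can have more than one correct response per context, which is why rank-based metrics such as MAP and MRR are also reported there.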
