Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering

Seunghyun Yoon, Joongbo Shin, Kyomin Jung

Published: 2017-10-10 · NAACL 2018
Tasks: Question Answering · Learning-To-Rank · Clustering · Answer Selection

Abstract

In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded into a vector representation from the word level up to the chunk level, so that the meaning of the entire text is effectively captured. In particular, by adopting the hierarchical structure, our model shows very little performance degradation on longer texts, whereas other state-of-the-art recurrent neural network models suffer in this setting. Additionally, the latent topic clustering module extracts semantic information from the target samples. This clustering module is useful for any text-related task, since it allows each data sample to find its nearest topic cluster, which helps the neural network model analyze the data as a whole. We evaluate our models on the Ubuntu Dialogue Corpus and on a consumer-electronics-domain question answering dataset related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs.
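The latent topic clustering step described above can be pictured as a soft assignment of each encoded text vector to K learnable topic vectors, with the resulting topic feature concatenated back onto the encoding before ranking. A minimal pure-Python sketch of that idea, assuming dot-product similarity and a softmax assignment (the function name `topic_cluster` and all dimensions are illustrative, not taken from the paper's code):

```python
import math

def topic_cluster(encoding, topics):
    """Soft-assign an encoded text vector to latent topic vectors.

    encoding: list of d floats (output vector of the RNN encoder)
    topics:   list of K topic vectors, each d floats (learned in training)
    Returns the encoding concatenated with its topic feature (length 2d).
    """
    # Similarity of the encoding to each topic vector (dot product).
    scores = [sum(e * t for e, t in zip(encoding, topic)) for topic in topics]
    # Softmax over topics -> soft cluster-assignment weights.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [x / z for x in exps]
    # Topic feature: assignment-weighted average of the topic vectors.
    d = len(encoding)
    topic_feature = [sum(w * topic[i] for w, topic in zip(weights, topics))
                     for i in range(d)]
    return encoding + topic_feature

enc = [1.0, 0.0, 0.0]
topics = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
out = topic_cluster(enc, topics)
print(len(out))  # 6: original 3-dim encoding plus 3-dim topic feature
```

Because the assignment is soft (a softmax rather than a hard argmax), the module stays differentiable, so the topic vectors can be trained end-to-end with the rest of the ranking network.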

Results

Task               | Dataset                       | Metric       | Value | Model
Question Answering | Ubuntu Dialogue (v1, Ranking) | 1 in 10 R@1  | 0.684 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v1, Ranking) | 1 in 10 R@2  | 0.822 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v1, Ranking) | 1 in 10 R@5  | 0.96  | HRDE-LTC
Question Answering | Ubuntu Dialogue (v1, Ranking) | 1 in 2 R@1   | 0.916 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v2, Ranking) | 1 in 10 R@1  | 0.652 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v2, Ranking) | 1 in 10 R@2  | 0.815 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v2, Ranking) | 1 in 10 R@5  | 0.966 | HRDE-LTC
Question Answering | Ubuntu Dialogue (v2, Ranking) | 1 in 2 R@1   | 0.915 | HRDE-LTC
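The "1 in N R@k" metric in the table reads: given one correct answer among N candidates, how often does the model rank the correct one in its top k? A short sketch of how this recall is computed for a single example (the function name `recall_at_k` and the toy scores are illustrative; the reported values are this indicator averaged over the whole test set):

```python
def recall_at_k(scored_candidates, true_answer, k):
    """1-in-N R@k: is the true answer among the top-k scored candidates?

    scored_candidates: list of (candidate, model_score) pairs (N of them)
    true_answer: the single correct candidate
    """
    # Rank candidates by model score, highest first.
    ranked = sorted(scored_candidates, key=lambda cs: cs[1], reverse=True)
    top_k = [cand for cand, _ in ranked[:k]]
    return true_answer in top_k

# One "1 in 10" example: ten candidates, one of which is correct.
cands = [(f"ans{i}", score) for i, score in
         enumerate([0.1, 0.9, 0.3, 0.2, 0.8, 0.05, 0.4, 0.6, 0.7, 0.15])]
print(recall_at_k(cands, "ans1", 1))  # True  ("ans1" has the top score, 0.9)
print(recall_at_k(cands, "ans2", 2))  # False ("ans2" scores 0.3, rank 6)
```

Note that "1 in 2 R@1" is simply binary selection between one correct and one incorrect answer, which is why its values sit well above the "1 in 10" ones.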

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)