
Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning

Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, Anton van den Hengel

2017-11-21 · CVPR 2018
Tasks: Question Answering · Visual Dialog · Reinforcement Learning · Visual Question Answering (VQA)

Paper · PDF

Abstract

The Visual Dialogue task requires an agent to engage in a conversation about an image with a human. It extends the Visual Question Answering task in that the agent must still answer a question about an image, but it needs to do so in light of the previous dialogue that has taken place. The key challenge in Visual Dialogue is thus maintaining a consistent and natural dialogue while continuing to answer questions correctly. We present a novel approach that combines Reinforcement Learning and Generative Adversarial Networks (GANs) to generate more human-like responses to questions. The GAN helps overcome the relative paucity of training data and the tendency of the typical MLE-based approach to generate overly terse answers. Critically, the GAN is tightly integrated into the attention mechanism that generates human-interpretable reasons for each answer. This means that the discriminative model of the GAN has the task of assessing whether a candidate answer is generated by a human or not, given the provided reason. This is significant because it drives the generative model to produce high-quality answers that are well supported by the associated reasoning. The method also achieves state-of-the-art results on the primary benchmark.
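The training signal the abstract describes is a discriminator score, i.e. "how human does this answer look, given the reason", fed back to the generator as a reinforcement-learning reward. A minimal toy sketch of that adversarial-reward loop, assuming a tabular REINFORCE policy over four candidate answers and a hypothetical fixed discriminator (none of these simplifications are the authors' implementation, which uses a co-attention sequence model):

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(answer_id):
    # Hypothetical stand-in for the paper's discriminator: the probability
    # that an answer is human-generated given the (here implicit) reason.
    # Answer 2 plays the role of the "human-like" response.
    return 0.9 if answer_id == 2 else 0.1

logits = np.zeros(4)   # tabular generator policy over 4 candidate answers
lr = 0.5

def policy():
    e = np.exp(logits - logits.max())
    return e / e.sum()

d_all = np.array([discriminator(i) for i in range(4)])

for step in range(300):
    p = policy()
    a = rng.choice(4, p=p)                 # sample an answer from the generator
    reward = discriminator(a)              # adversarial reward from D
    baseline = float(p @ d_all)            # expected reward, reduces variance
    grad_log = -p                          # d log pi(a) / d logits (softmax)
    grad_log[a] += 1.0
    logits += lr * (reward - baseline) * grad_log
```

After training, the policy concentrates on the answer the discriminator judges human-like, which is the mechanism the paper uses (at sequence level) to push the generator away from overly terse MLE answers.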

Results

Task           Dataset           Metric     Value   Model
Dialogue       VisDial v0.9 val  MRR        63.98   CoAtt
Dialogue       VisDial v0.9 val  Mean Rank   4.47   CoAtt
Dialogue       VisDial v0.9 val  R@1        50.29   CoAtt
Dialogue       VisDial v0.9 val  R@5        80.71   CoAtt
Dialogue       VisDial v0.9 val  R@10       88.81   CoAtt
Visual Dialog  VisDial v0.9 val  MRR        63.98   CoAtt
Visual Dialog  VisDial v0.9 val  Mean Rank   4.47   CoAtt
Visual Dialog  VisDial v0.9 val  R@1        50.29   CoAtt
Visual Dialog  VisDial v0.9 val  R@5        80.71   CoAtt
Visual Dialog  VisDial v0.9 val  R@10       88.81   CoAtt

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)