Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Making History Matter: History-Advantage Sequence Training for Visual Dialog

Tianhao Yang, Zheng-Jun Zha, Hanwang Zhang

2019-02-25 · ICCV 2019 · Visual Dialog · Reinforcement Learning · Visual Reasoning · Answer Generation · Response Generation

Paper · PDF

Abstract

We study multi-round response generation in visual dialog, where a response is generated according to a visually grounded conversational history. Given a triplet: an image, Q&A history, and current question, all the prevailing methods follow a codec (i.e., encoder-decoder) fashion in a supervised learning paradigm: a multimodal encoder encodes the triplet into a feature vector, which is then fed into the decoder for the current answer generation, supervised by the ground-truth. However, this conventional supervised learning does NOT take into account the impact of imperfect history, violating the conversational nature of visual dialog and thus making the codec more inclined to learn history bias rather than contextual reasoning. To this end, inspired by the actor-critic policy gradient in reinforcement learning, we propose a novel training paradigm called History Advantage Sequence Training (HAST). Specifically, we intentionally impose wrong answers in the history, obtaining an adverse critic, and see how the historic error impacts the codec's future behavior by the History Advantage, a quantity obtained by subtracting the adverse critic from the gold reward of the ground-truth history. Moreover, to make the codec more sensitive to the history, we propose a novel attention network called the History-Aware Co-Attention Network (HACAN), which can be effectively trained by using HAST. Experimental results on three benchmarks, VisDial v0.9 & v1.0 and GuessWhat?!, show that the proposed HAST strategy consistently outperforms the state-of-the-art supervised counterparts.
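The core quantity in HAST is the History Advantage: the reward obtained with the ground-truth history minus the "adverse critic" reward obtained after wrong answers are deliberately injected into the history. A minimal, illustrative sketch of how such an advantage could weight a REINFORCE-style sequence loss is below; the function and argument names are assumptions for illustration, not the authors' implementation, and the paper's actual reward definitions and network are not reproduced here.

```python
def history_advantage_loss(token_log_probs, gold_reward, adverse_reward):
    """Illustrative History-Advantage-weighted policy-gradient loss.

    token_log_probs: log-probabilities of the sampled answer tokens
    gold_reward:     scalar reward when the dialog history is the ground truth
    adverse_reward:  scalar reward (the "adverse critic") when wrong answers
                     are deliberately imposed in the history
    """
    # History Advantage: how much the corrupted history hurts the reward.
    advantage = gold_reward - adverse_reward
    # REINFORCE-style objective: scale the sequence log-likelihood by the
    # advantage, pushing the model to be sensitive to history errors.
    return -advantage * sum(token_log_probs)
```

A positive advantage (corrupted history lowers the reward) reinforces the sampled answer; an advantage near zero indicates the model ignored the history, so the gradient vanishes for that sample.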

Results

Task          | Dataset                     | Metric      | Value  | Model
Dialogue      | VisDial v0.9 val            | MRR         | 0.6792 | HACAN
Dialogue      | VisDial v0.9 val            | Mean Rank   | 3.97   | HACAN
Dialogue      | VisDial v0.9 val            | R@1         | 54.76  | HACAN
Dialogue      | VisDial v0.9 val            | R@5         | 83.03  | HACAN
Dialogue      | VisDial v0.9 val            | R@10        | 90.68  | HACAN
Dialogue      | Visual Dialog v1.0 test-std | MRR (x 100) | 64.22  | HACAN
Dialogue      | Visual Dialog v1.0 test-std | Mean        | 4.2    | HACAN
Dialogue      | Visual Dialog v1.0 test-std | NDCG (x 100)| 57.17  | HACAN
Dialogue      | Visual Dialog v1.0 test-std | R@1         | 50.88  | HACAN
Dialogue      | Visual Dialog v1.0 test-std | R@5         | 80.63  | HACAN
Dialogue      | Visual Dialog v1.0 test-std | R@10        | 89.45  | HACAN
Visual Dialog | VisDial v0.9 val            | MRR         | 0.6792 | HACAN
Visual Dialog | VisDial v0.9 val            | Mean Rank   | 3.97   | HACAN
Visual Dialog | VisDial v0.9 val            | R@1         | 54.76  | HACAN
Visual Dialog | VisDial v0.9 val            | R@5         | 83.03  | HACAN
Visual Dialog | VisDial v0.9 val            | R@10        | 90.68  | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | MRR (x 100) | 64.22  | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | Mean        | 4.2    | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | NDCG (x 100)| 57.17  | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | R@1         | 50.88  | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | R@5         | 80.63  | HACAN
Visual Dialog | Visual Dialog v1.0 test-std | R@10        | 89.45  | HACAN

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)