Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Two can play this Game: Visual Dialog with Discriminative Question Generation and Answering

Unnat Jain, Svetlana Lazebnik, Alexander Schwing

Published 2018-03-29 · CVPR 2018
Tasks: Question Answering, Visual Dialog, Image Captioning, Question Generation, Visual Question Answering (VQA)

Abstract

Human conversation is a complex mechanism with subtle nuances. It is hence an ambitious goal to develop artificial intelligence agents that can participate fluently in a conversation. While we are still far from achieving this goal, recent progress in visual question answering, image captioning, and visual question generation shows that dialog systems may be realizable in the not too distant future. To this end, a novel dataset was introduced recently and encouraging results were demonstrated, particularly for question answering. In this paper, we demonstrate a simple symmetric discriminative baseline that can be applied both to predicting an answer and to predicting a question. We show that this method performs on par with the state of the art, including memory-network-based methods. In addition, for the first time on the visual dialog dataset, we assess the performance of a system asking questions, and demonstrate how visual dialog can be generated from discriminative question generation and question answering.
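The discriminative setup described in the abstract scores a fixed pool of candidate responses and ranks them, rather than generating text token by token. The sketch below is purely illustrative and not the authors' architecture: it assumes image, question, and history features are fused additively and candidates are scored by a dot product; all names, shapes, and the fusion rule are hypothetical.

```python
import numpy as np

# Illustrative discriminative candidate ranking (NOT the paper's model).
# Assumptions: precomputed d-dim embeddings, additive fusion, dot-product
# scoring over a pool of 100 candidate answers (the VisDial protocol size).

rng = np.random.default_rng(0)
d = 16
img, question, history = rng.normal(size=(3, d))
context = img + question + history             # hypothetical fusion step

candidates = rng.normal(size=(100, d))         # one embedding per candidate
scores = candidates @ context                  # one scalar score per candidate

ranking = np.argsort(-scores)                  # candidate indices, best first
rank_of = {c: i + 1 for i, c in enumerate(ranking)}  # 1-indexed rank lookup
```

Because answering and asking are both cast as ranking over candidates, the same scoring machinery can be reused symmetrically for question generation, which is the point the abstract makes.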

Results

Task           Dataset           Metric     Value   Model
Dialogue       VisDial v0.9 val  MRR        62.42   SF-QIH-se-2
Dialogue       VisDial v0.9 val  Mean Rank  4.7     SF-QIH-se-2
Dialogue       VisDial v0.9 val  R@1        48.55   SF-QIH-se-2
Dialogue       VisDial v0.9 val  R@5        78.96   SF-QIH-se-2
Dialogue       VisDial v0.9 val  R@10       87.75   SF-QIH-se-2
Visual Dialog  VisDial v0.9 val  MRR        62.42   SF-QIH-se-2
Visual Dialog  VisDial v0.9 val  Mean Rank  4.7     SF-QIH-se-2
Visual Dialog  VisDial v0.9 val  R@1        48.55   SF-QIH-se-2
Visual Dialog  VisDial v0.9 val  R@5        78.96   SF-QIH-se-2
Visual Dialog  VisDial v0.9 val  R@10       87.75   SF-QIH-se-2
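The retrieval metrics in the table (MRR, R@k, Mean Rank) are all functions of the rank of the ground-truth response among the candidate answers at each dialog round. A minimal sketch of how they are computed, assuming 1-indexed ranks as input:

```python
# Minimal sketch (not the authors' evaluation code): the standard retrieval
# metrics from the table above, computed from 1-indexed ranks of the
# ground-truth answer at each dialog round.

def retrieval_metrics(ranks, ks=(1, 5, 10)):
    """ranks: 1-indexed rank of the correct candidate per round."""
    n = len(ranks)
    metrics = {
        "MRR": sum(1.0 / r for r in ranks) / n,        # mean reciprocal rank
        "Mean Rank": sum(ranks) / n,                   # lower is better
    }
    for k in ks:
        # fraction of rounds where the correct answer lands in the top k
        metrics[f"R@{k}"] = sum(r <= k for r in ranks) / n
    return metrics

# Example: three rounds where the correct answer ranked 1st, 3rd, and 20th.
print(retrieval_metrics([1, 3, 20]))
```

Note that MRR and R@k improve as they increase, while Mean Rank improves as it decreases, which is why the table pairs a high MRR (62.42, reported on a 0-100 scale) with a low Mean Rank (4.7).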

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)