Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Visual Coreference Resolution in Visual Dialog using Neural Module Networks

Satwik Kottur, José M. F. Moura, Devi Parikh, Dhruv Batra, Marcus Rohrbach

2018-09-06 · ECCV 2018
Tasks: Visual Grounding, Visual Dialog, Coreference Resolution, Common Sense Reasoning, Visual Question Answering (VQA)
Paper · PDF · Code

Abstract

Visual dialog entails answering a series of questions grounded in an image, using dialog history as context. In addition to the challenges found in visual question answering (VQA), which can be seen as one-round dialog, visual dialog encompasses several more. We focus on one such problem called visual coreference resolution that involves determining which words, typically noun phrases and pronouns, co-refer to the same entity/object instance in an image. This is crucial, especially for pronouns (e.g., 'it'), as the dialog agent must first link it to a previous coreference (e.g., 'boat'), and only then can rely on the visual grounding of the coreference 'boat' to reason about the pronoun 'it'. Prior work (in visual dialog) models visual coreference resolution either (a) implicitly via a memory network over history, or (b) at a coarse level for the entire question; and not explicitly at a phrase level of granularity. In this work, we propose a neural module network architecture for visual dialog by introducing two novel modules - Refer and Exclude - that perform explicit, grounded, coreference resolution at a finer word level. We demonstrate the effectiveness of our model on MNIST Dialog, a visually simple yet coreference-wise complex dataset, by achieving near perfect accuracy, and on VisDial, a large and challenging visual dialog dataset on real images, where our model outperforms other approaches, and is more interpretable, grounded, and consistent qualitatively.
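The Refer module described above resolves a pronoun by linking it back to a previously grounded noun phrase and reusing that phrase's visual grounding. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the function name, the dot-product attention scheme, and the memory layout are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def refer(pronoun_emb, memory):
    """Toy Refer step: soft attention over dialog memory to retrieve
    the visual grounding of the entity a pronoun co-refers with.

    memory: list of (phrase_embedding, attention_map) pairs saved from
    earlier rounds, e.g. the grounding computed for 'boat'."""
    keys = np.stack([k for k, _ in memory])   # (n, d) phrase embeddings
    maps = np.stack([m for _, m in memory])   # (n, H, W) stored groundings
    scores = keys @ pronoun_emb               # similarity to each phrase
    weights = softmax(scores)                 # soft coreference link
    # Weighted combination of stored attention maps = grounding for 'it'
    return np.tensordot(weights, maps, axes=1)
```

In this sketch, a sharp similarity peak on one stored phrase makes the returned map essentially a copy of that phrase's grounding, which is the behavior one wants when 'it' clearly refers to one earlier entity.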

Results

Task | Dataset | Metric | Value | Model
Dialogue / Visual Dialog | VisDial v0.9 val | MRR | 64.1 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | VisDial v0.9 val | Mean Rank | 4.45 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | VisDial v0.9 val | R@1 | 50.92 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | VisDial v0.9 val | R@5 | 80.18 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | VisDial v0.9 val | R@10 | 88.81 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | VisDial v0.9 val | MRR | 63.6 | CorefNMN
Dialogue / Visual Dialog | VisDial v0.9 val | Mean Rank | 4.53 | CorefNMN
Dialogue / Visual Dialog | VisDial v0.9 val | R@1 | 50.24 | CorefNMN
Dialogue / Visual Dialog | VisDial v0.9 val | R@5 | 79.81 | CorefNMN
Dialogue / Visual Dialog | VisDial v0.9 val | R@10 | 88.51 | CorefNMN
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | MRR (x 100) | 61.5 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | Mean Rank | 4.4 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | NDCG (x 100) | 54.7 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | R@1 | 47.55 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | R@5 | 78.1 | CorefNMN (ResNet-152)
Dialogue / Visual Dialog | Visual Dialog v1.0 test-std | R@10 | 88.8 | CorefNMN (ResNet-152)
Common Sense Reasoning | Visual Dialog v0.9 | 1 in 10 R@5 | 80.1 | NMN [kottur2018visual]
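The retrieval metrics in the table (MRR, R@k, Mean Rank) are all derived from the rank of the ground-truth answer among the candidate answers for each question. As a hedged illustration, with a function name and aggregation of my own choosing rather than anything from the paper or the VisDial evaluation code:

```python
def retrieval_metrics(ranks):
    """Compute VisDial-style retrieval metrics from the 1-based rank
    of the ground-truth answer among the candidates for each question.
    MRR and R@k are reported here as fractions; leaderboards often
    scale them by 100 (hence values like 64.1)."""
    n = len(ranks)
    return {
        "MRR": sum(1.0 / r for r in ranks) / n,       # mean reciprocal rank
        "R@1": sum(r <= 1 for r in ranks) / n,        # recall at 1
        "R@5": sum(r <= 5 for r in ranks) / n,        # recall at 5
        "R@10": sum(r <= 10 for r in ranks) / n,      # recall at 10
        "Mean Rank": sum(ranks) / n,                  # lower is better
    }
```

For example, ranks of [1, 2, 10] over three questions give an MRR of (1 + 0.5 + 0.1) / 3 ≈ 0.533, i.e. 53.3 on the x100 scale used in the table.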

Related Papers

Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition (2025-07-15)
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation (2025-07-09)
A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding (2025-07-09)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)