Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Efficient Attention Mechanism for Visual Dialog that can Handle All the Interactions between Multiple Inputs

Van-Quang Nguyen, Masanori Suganuma, Takayuki Okatani

2019-11-26 · ECCV 2020 · Visual Dialog

Paper · PDF · Code (official)

Abstract

It has been a primary concern in recent studies of vision and language tasks to design an effective attention mechanism dealing with interactions between the two modalities. The Transformer has recently been extended and applied to several bi-modal tasks, yielding promising results. For visual dialog, it becomes necessary to consider interactions among three or more inputs, i.e., an image, a question, and a dialog history, or even the individual components of the dialog history. In this paper, we present a neural architecture named Light-weight Transformer for Many Inputs (LTMI) that can efficiently deal with all the interactions between multiple such inputs in visual dialog. It has a block structure similar to the Transformer and employs the same design of attention computation, yet it has only a small number of parameters while retaining sufficient representational power for the purpose. Assuming a standard setting of visual dialog, a layer built upon the proposed attention block has fewer than one-tenth the parameters of its counterpart, a natural Transformer extension. The experimental results on the VisDial datasets validate the effectiveness of the proposed approach, showing improvements of the best NDCG score on the VisDial v1.0 dataset from 57.59 to 60.92 with a single model, from 64.47 to 66.53 with ensemble models, and even to 74.88 with additional finetuning. Our implementation code is available at https://github.com/davidnvq/visdial.
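The core idea described in the abstract, attending one input over several other inputs with a shared, lightweight attention computation rather than a separate full Transformer per input pair, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function names `attend` and `ltmi_block`, and the single shared output projection `W_out`, are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend(target, source, d):
    # scaled dot-product attention of target features over one source
    scores = target @ source.T / np.sqrt(d)          # (n_tgt, n_src)
    return softmax(scores, axis=-1) @ source         # (n_tgt, d)

def ltmi_block(target, sources, W_out):
    # Sketch of attention over many inputs: attend the target to every
    # source, concatenate the results with the target itself, then map
    # back to the original dimension with ONE shared projection, which
    # keeps the parameter count small regardless of how many sources
    # (image, question, dialog history, ...) are involved.
    d = target.shape[1]
    attended = [attend(target, s, d) for s in sources]
    concat = np.concatenate([target] + attended, axis=1)  # (n_tgt, d*(1+M))
    return concat @ W_out                                 # (n_tgt, d)
```

A full block would add learned query/key/value projections, multiple heads, and layer normalization as in the Transformer; the sketch only shows why handling M sources costs one projection rather than M pairwise attention stacks.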

Results

Task | Dataset | Metric | Value | Model
Dialogue | Visual Dialog v1.0 test-std | MRR (×100) | 52.14 | Ensemble + Finetune
Dialogue | Visual Dialog v1.0 test-std | Mean | 6.53 | Ensemble + Finetune
Dialogue | Visual Dialog v1.0 test-std | NDCG (×100) | 74.88 | Ensemble + Finetune
Dialogue | Visual Dialog v1.0 test-std | R@1 | 38.92 | Ensemble + Finetune
Dialogue | Visual Dialog v1.0 test-std | R@5 | 66.60 | Ensemble + Finetune
Dialogue | Visual Dialog v1.0 test-std | R@10 | 80.65 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | MRR (×100) | 52.14 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | Mean | 6.53 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | NDCG (×100) | 74.88 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | R@1 | 38.92 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | R@5 | 66.60 | Ensemble + Finetune
Visual Dialog | Visual Dialog v1.0 test-std | R@10 | 80.65 | Ensemble + Finetune

Related Papers

Modeling Code: Is Text All You Need? (2025-07-15)
All Eyes, no IMU: Learning Flight Attitude from Vision Alone (2025-07-15)
Is Diversity All You Need for Scalable Robotic Manipulation? (2025-07-08)
DESIGN AND IMPLEMENTATION OF ONLINE CLEARANCE REPORT. (2025-07-07)
Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models (2025-07-03)
Prompt2SegCXR: Prompt to Segment All Organs and Diseases in Chest X-rays (2025-07-01)
State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
EAMamba: Efficient All-Around Vision State Space Model for Image Restoration (2025-06-27)