Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bilinear Graph Networks for Visual Question Answering

Dalu Guo, Chang Xu, Dacheng Tao

2019-07-23 · Question Answering · Visual Question Answering (VQA)

Abstract

This paper revisits bilinear attention networks for the visual question answering task from a graph perspective. Classical bilinear attention networks build a bilinear attention map to extract the joint representation of words in the question and objects in the image, but they do not fully explore the relationships between words, which limits complex reasoning. In contrast, we develop bilinear graph networks to model the context of the joint embeddings of words and objects. Two kinds of graphs are investigated: an image-graph and a question-graph. The image-graph transfers features of the detected objects to their related query words, so that the output nodes carry both semantic and factual information. The question-graph exchanges information among these output nodes to amplify implicit yet important relationships between objects. The two graphs cooperate with each other, so the resulting model can capture the relationships and dependencies between objects, enabling multi-step reasoning. Experimental results on the VQA v2.0 validation dataset demonstrate our method's ability to handle complex questions. On the test-std set, our best single model achieves state-of-the-art performance, boosting the overall accuracy to 72.41%.
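The two graph layers described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: all function names, the residual connections, and the single-head linear message passing are assumptions; the paper's actual model uses learned multi-head bilinear attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_attention(Q, V, W):
    # Q: (n_words, d_q) question word features
    # V: (n_objects, d_v) detected object features
    # W: (d_q, d_v) bilinear weight; A[i, j] = relevance of object j to word i
    return softmax(Q @ W @ V.T, axis=1)

def image_graph_layer(Q, V, W_att, W_msg):
    # image-graph: transfer object features to their related query words,
    # so output nodes carry both semantic (word) and factual (visual) info
    A = bilinear_attention(Q, V, W_att)   # (n_words, n_objects)
    return Q + A @ (V @ W_msg)            # residual message passing (assumed)

def question_graph_layer(X, W_att, W_msg):
    # question-graph: exchange information among the image-graph's output
    # nodes to amplify implicit relationships between objects
    A = softmax((X @ W_att) @ X.T, axis=1)  # node-node affinity
    return X + A @ (X @ W_msg)
```

Stacking an image-graph layer followed by a question-graph layer gives one round of the multi-step reasoning the paper describes; deeper stacks repeat the pattern.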

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Visual Question Answering (VQA) | GQA Test2019 | Accuracy | 61.22 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Binary | 78.69 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Consistency | 90.31 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Distribution | 6.77 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Open | 45.81 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Plausibility | 85.43 | GRN |
| Visual Question Answering (VQA) | GQA Test2019 | Validity | 96.36 | GRN |
| Visual Question Answering (VQA) | VQA v2 test-std | number | 61.13 | BGN, ensemble |
| Visual Question Answering (VQA) | VQA v2 test-std | other | 66.28 | BGN, ensemble |
| Visual Question Answering (VQA) | VQA v2 test-std | overall | 75.92 | BGN, ensemble |
| Visual Question Answering (VQA) | VQA v2 test-std | yes/no | 90.89 | BGN, ensemble |

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
- Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)