Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection

Hedi Ben-Younes, Rémi Cadene, Nicolas Thome, Matthieu Cord

2019-01-31 · Question Answering · Representation Learning · Visual Relationship Detection · Visual Question Answering (VQA) · Relationship Detection · Visual Question Answering

Paper · PDF · Code (official)

Abstract

Multimodal representation learning is gaining more and more interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion model based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both the rank and the mode ranks of tensors, already used for multimodal fusion. This makes it possible to define new ways of optimizing the trade-off between the expressiveness and the complexity of the fusion model, and to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), where we design end-to-end learnable architectures for representing relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models on both the VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch.
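The core idea in the abstract — restricting the bilinear interaction tensor to a block-superdiagonal structure so parameters grow with the block size rather than the full input dimensions — can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions, not the authors' PyTorch implementation; the names `block_fusion`, `Wx`, `Wy`, and `cores` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_fusion(x, y, cores, Wx, Wy):
    """Sketch of block-superdiagonal bilinear fusion (illustrative only).

    Each modality is projected into a space of num_blocks * block_dim
    dimensions; bilinear interactions are then computed only *within*
    each block, via one small 3-way core tensor per block.
    """
    hx = Wx @ x                      # project modality 1
    hy = Wy @ y                      # project modality 2
    xs = np.split(hx, len(cores))    # chunk into per-block slices
    ys = np.split(hy, len(cores))
    outs = []
    for core, xc, yc in zip(cores, xs, ys):
        # z_k = sum_{i,j} core[i, j, k] * xc[i] * yc[j]
        outs.append(np.einsum('ijk,i,j->k', core, xc, yc))
    return np.concatenate(outs)

# toy sizes: 2 blocks, block dim 3, per-block output dim 4
dx, dy, blocks, bd, od = 5, 6, 2, 3, 4
Wx = rng.standard_normal((blocks * bd, dx))
Wy = rng.standard_normal((blocks * bd, dy))
cores = [rng.standard_normal((bd, bd, od)) for _ in range(blocks)]
z = block_fusion(rng.standard_normal(dx), rng.standard_normal(dy),
                 cores, Wx, Wy)
print(z.shape)  # (8,) = blocks * od
```

A full bilinear tensor over the projected space would need (blocks·bd)² · (blocks·od) core parameters; the block-superdiagonal form only needs blocks · bd² · od, which is where the quadratic blow-up mentioned in the abstract is avoided.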

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 67.58 | BLOCK |
| Visual Question Answering (VQA) | VQA v2 test-std | Overall | 67.9 | BLOCK |
| Visual Relationship Detection | VRD Relationship Detection | R@100 | 20.96 | BLOCK |
| Visual Relationship Detection | VRD Relationship Detection | R@50 | 19.06 | BLOCK |
| Visual Relationship Detection | VRD Predicate Detection | R@100 | 92.58 | BLOCK |
| Visual Relationship Detection | VRD Predicate Detection | R@50 | 86.58 | BLOCK |
| Visual Relationship Detection | VRD Phrase Detection | R@100 | 28.96 | BLOCK |
| Visual Relationship Detection | VRD Phrase Detection | R@50 | 26.32 | BLOCK |
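The R@K figures in the table are recall-at-K: the fraction of ground-truth relationship triplets that appear among a model's top-K scored predictions for an image. A minimal sketch of the metric (the function name and the toy triplets are illustrative, not from the paper's evaluation code):

```python
def recall_at_k(predictions, ground_truth, k):
    """Recall@K: fraction of ground-truth triplets recovered in the
    top-k predictions. `predictions` is a list of (score, triplet);
    triplets are hashable, e.g. (subject, predicate, object) tuples."""
    top = {t for _, t in sorted(predictions, key=lambda p: -p[0])[:k]}
    hits = sum(1 for t in ground_truth if t in top)
    return hits / len(ground_truth)

preds = [(0.9, ('person', 'ride', 'horse')),
         (0.7, ('person', 'wear', 'hat')),
         (0.2, ('horse', 'eat', 'grass'))]
gt = [('person', 'ride', 'horse'), ('horse', 'eat', 'grass')]
print(recall_at_k(preds, gt, 2))  # 0.5: only 'ride' is in the top-2
```

Note that benchmark implementations typically add matching constraints (bounding-box IoU thresholds, per-pair prediction limits) that this sketch omits.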

Related Papers

- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)