Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ScanQA: 3D Question Answering for Spatial Scene Understanding

Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, Motoaki Kawanabe

2021-12-20 · CVPR 2022

Tasks: Question Answering · Scene Understanding · Sentence Embeddings · Object Localization · 3D Question Answering (3D-QA) · Visual Question Answering (VQA)

Paper · PDF · Code (official)

Abstract

We propose a new 3D spatial understanding task, 3D Question Answering (3D-QA). In the 3D-QA task, models receive visual information from an entire 3D scene, given as a rich RGB-D indoor scan, and answer textual questions about that scene. Unlike in the 2D question answering of VQA, conventional 2D-QA models struggle with spatial understanding of object alignment and directions, and fail to identify the objects referenced in textual questions in the 3D-QA setting. We propose a baseline model for 3D-QA, named the ScanQA model, which learns a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates language expressions with the underlying geometric features of the 3D scan and facilitates the regression of 3D bounding boxes, allowing the model to localize the objects described in textual questions and output correct answers. We collected human-edited question-answer pairs with free-form answers that are grounded to 3D objects in each 3D scene. Our new ScanQA dataset contains over 40K question-answer pairs from the 800 indoor scenes drawn from the ScanNet dataset. To the best of our knowledge, the proposed 3D-QA task is the first large-scale effort to perform object-grounded question answering in 3D environments.
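The fused-descriptor idea in the abstract, attending over 3D object proposals with a question embedding and then predicting both an answer and a 3D box, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: all dimensions, weight matrices, and the simple dot-product attention are assumptions for the sketch.

```python
import numpy as np

# Illustrative sizes only (not taken from the paper).
NUM_PROPOSALS = 256   # 3D object proposals per scene
OBJ_DIM = 128         # per-proposal geometric feature size
LANG_DIM = 128        # encoded question (sentence embedding) size
NUM_ANSWERS = 100     # answer vocabulary size

rng = np.random.default_rng(0)

def fuse_and_predict(obj_feats, q_emb, w_fuse, w_ans, w_box):
    """Sketch of the fused descriptor: attend over object proposals with
    the question embedding, then predict an answer distribution and
    regress a 3D bounding box (center + size) for the referenced object."""
    # Attention scores: similarity between each proposal and the question.
    scores = obj_feats @ q_emb                      # (NUM_PROPOSALS,)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                              # softmax over proposals
    # Fused descriptor: attention-pooled proposal features + question embedding.
    pooled = attn @ obj_feats                       # (OBJ_DIM,)
    fused = np.tanh(w_fuse @ np.concatenate([pooled, q_emb]))
    answer_logits = w_ans @ fused                   # (NUM_ANSWERS,)
    box = w_box @ fused                             # (6,): center xyz + size whd
    return answer_logits, box

# Random stand-ins for learned features and weights.
obj_feats = rng.standard_normal((NUM_PROPOSALS, OBJ_DIM))
q_emb = rng.standard_normal(LANG_DIM)
w_fuse = rng.standard_normal((OBJ_DIM, OBJ_DIM + LANG_DIM)) * 0.05
w_ans = rng.standard_normal((NUM_ANSWERS, OBJ_DIM)) * 0.05
w_box = rng.standard_normal((6, OBJ_DIM)) * 0.05

logits, box = fuse_and_predict(obj_feats, q_emb, w_fuse, w_ans, w_box)
print(logits.shape, box.shape)  # (100,) (6,)
```

The key point the sketch captures is that a single fused vector drives both heads, so the answer prediction and the box regression are tied to the same language-conditioned geometric evidence.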

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | SQA3D | Exact Match | 47.2 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-1 | 31.56 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-4 | 12.04 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | CIDEr | 67.29 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | Exact Match | 23.45 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | METEOR | 13.55 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | ROUGE | 34.34 | ScanQA |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-1 | 27.85 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-4 | 7.46 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | CIDEr | 57.56 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | Exact Match | 20.56 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | METEOR | 11.97 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | ROUGE | 30.68 | ScanRefer+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-1 | 29.46 | VoteNet+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-4 | 6.08 | VoteNet+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | CIDEr | 58.23 | VoteNet+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | Exact Match | 19.71 | VoteNet+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | METEOR | 12.07 | VoteNet+MCAN |
| Visual Question Answering (VQA) | ScanQA Test w/ objects | ROUGE | 30.97 | VoteNet+MCAN |
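Among the metrics above, Exact Match is the simplest: the percentage of predicted answers that match a reference answer string exactly. A minimal sketch, assuming a common lowercase-and-whitespace normalization (the paper's exact protocol may differ):

```python
def normalize(answer: str) -> str:
    """Lowercase, strip, and collapse whitespace before comparison.
    An assumed normalization; evaluation scripts vary on the details."""
    return " ".join(answer.lower().strip().split())

def exact_match(predictions, references) -> float:
    """Percentage of predictions that exactly match their reference
    answer after normalization."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return 100.0 * hits / len(predictions)

preds = ["a brown chair", "Table", "on the left "]
refs  = ["a brown chair", "table", "on the right"]
print(exact_match(preds, refs))  # 2 of 3 match -> 66.67 (rounded)
```

The overlap-based metrics (BLEU, METEOR, ROUGE, CIDEr) are reported alongside Exact Match because free-form answers can be correct without matching the reference verbatim.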

Related Papers

From Neurons to Semantics: Evaluating Cross-Linguistic Alignment Capabilities of Large Language Models via Neurons Alignment (2025-07-20)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Advancing Complex Wide-Area Scene Understanding with Hierarchical Coresets Selection (2025-07-17)
Argus: Leveraging Multiview Images for Improved 3-D Scene Understanding With Large Language Models (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)