
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding

Dave Zhenyu Chen, Qirui Wu, Matthias Nießner, Angel X. Chang

2021-12-02 · Visual Grounding · 3D Dense Captioning · Caption Generation · 3D Visual Grounding · Dense Captioning

Abstract

Recent studies on dense captioning and visual grounding in 3D have achieved impressive results. Despite this progress, the limited amount of available 3D vision-language data causes overfitting for both 3D visual grounding and 3D dense captioning methods, and how to describe objects discriminatively in complex 3D environments has not been fully studied. To address these challenges, we present D3Net, an end-to-end neural speaker-listener architecture that can detect, describe, and discriminate. D3Net unifies dense captioning and visual grounding in 3D in a self-critical manner. This self-critical property introduces discriminability during object caption generation and enables semi-supervised training on ScanNet data with partially annotated descriptions. Our method outperforms state-of-the-art (SOTA) approaches on both tasks on the ScanRefer dataset, surpassing the previous SOTA 3D dense captioning method by a significant margin.
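
To make the speaker-listener idea more concrete, below is a minimal, self-contained PyTorch sketch of self-critical speaker-listener training. It is a toy illustration under stated assumptions, not the authors' D3Net implementation: the Speaker and Listener modules, feature dimensions, reward definition, and training loop are all hypothetical stand-ins for the paper's detector-backed caption generator, grounding module, and reward design.

```python
# Conceptual sketch only (not the authors' code): a speaker generates an object
# caption, a listener tries to ground it back to the correct object, and the
# grounding accuracy serves as a REINFORCE-style reward for the speaker.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, MAX_LEN = 100, 32, 8  # toy vocabulary size, feature dim, caption length

class Speaker(nn.Module):
    """Generates a token sequence describing one detected object (toy GRU decoder)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRUCell(DIM, DIM)
        self.out = nn.Linear(DIM, VOCAB)

    def sample(self, obj_feat, greedy=False):
        h = obj_feat                                   # object feature initializes the decoder state
        tok = torch.zeros(obj_feat.size(0), dtype=torch.long)
        logps, toks = [], []
        for _ in range(MAX_LEN):
            h = self.rnn(self.embed(tok), h)
            logits = self.out(h)
            dist = torch.distributions.Categorical(logits=logits)
            tok = logits.argmax(-1) if greedy else dist.sample()
            logps.append(dist.log_prob(tok))
            toks.append(tok)
        return torch.stack(toks, 1), torch.stack(logps, 1)

class Listener(nn.Module):
    """Scores how well a caption matches each candidate object (toy bag-of-words encoder)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)

    def forward(self, tokens, all_obj_feats):
        cap = self.embed(tokens).mean(1)               # (batch, DIM) caption embedding
        return cap @ all_obj_feats.t()                 # (batch, num_objects) matching scores

speaker, listener = Speaker(), Listener()
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=1e-3)

# One illustrative training step; random features stand in for detector outputs.
obj_feats = torch.randn(4, DIM)                        # 4 objects in a toy scene
target = torch.arange(4)                               # each caption should ground to its own object

opt.zero_grad()
toks, logps = speaker.sample(obj_feats)                # sampled captions
scores = listener(toks, obj_feats)
reward = (scores.argmax(-1) == target).float()         # 1 if the listener picks the right object

with torch.no_grad():                                  # greedy baseline, as in self-critical training
    base_toks, _ = speaker.sample(obj_feats, greedy=True)
    base_reward = (listener(base_toks, obj_feats).argmax(-1) == target).float()

speaker_loss = -((reward - base_reward).unsqueeze(1) * logps).mean()  # REINFORCE with baseline
listener_loss = F.cross_entropy(scores, target)                       # discriminative grounding loss
(speaker_loss + listener_loss).backward()
opt.step()
```

The key design point the sketch tries to convey is the feedback loop: captions that the listener can ground unambiguously receive higher reward than the greedy baseline, which pushes the speaker toward discriminative descriptions even for objects without annotated captions.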

Results

Task             | Dataset | Metric  | Value | Model
Image Captioning | Nr3D    | BLEU-4  | 20.7  | D3Net
Image Captioning | Nr3D    | CIDEr   | 33.85 | D3Net
Image Captioning | Nr3D    | METEOR  | 23.13 | D3Net
Image Captioning | Nr3D    | ROUGE-L | 53.38 | D3Net
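
For context on how scores like the BLEU-4 value above are typically computed, here is a minimal sketch using NLTK's sentence-level BLEU as a stand-in. It is illustrative only; the leaderboard numbers come from the standard captioning evaluation toolkits, and the example sentences below are made up.

```python
# Toy BLEU-4 computation with NLTK, standing in for the n-gram overlap metric
# reported above; not the evaluation code behind this table.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "there is a brown wooden chair next to the desk".split()
candidate = "a brown chair next to the desk".split()

# BLEU-4 uses uniform weights over 1- to 4-gram precisions; smoothing avoids
# zero scores when a higher-order n-gram has no match in a short caption.
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {bleu4:.3f}")
```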

Related Papers

ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition (2025-07-15)
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation (2025-07-09)
A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding (2025-07-09)
GNN-ViTCap: GNN-Enhanced Multiple Instance Learning with Vision Transformers for Whole Slide Image Classification and Captioning (2025-07-09)
High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning (2025-07-08)
GTA1: GUI Test-time Scaling Agent (2025-07-08)
DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World (2025-06-30)
SPAZER: Spatial-Semantic Progressive Reasoning Agent for Zero-shot 3D Visual Grounding (2025-06-27)