Sijin Chen, Hongyuan Zhu, Xin Chen, Yinjie Lei, Tao Chen, Gang Yu
3D dense captioning aims to generate multiple captions, each localized to its associated object region. Existing methods follow a sophisticated ``detect-then-describe'' pipeline equipped with numerous hand-crafted components. However, these hand-crafted components yield suboptimal performance when the spatial and class distributions of objects vary across cluttered scenes. In this paper, we propose a simple yet effective transformer framework, Vote2Cap-DETR, built on the recent, popular \textbf{DE}tection \textbf{TR}ansformer (DETR). Compared with prior art, our framework has several appealing advantages: 1) Without resorting to numerous hand-crafted components, our method uses a full transformer encoder-decoder architecture with a learnable, vote-query-driven object decoder and a caption decoder that produces dense captions in a set-prediction manner. 2) In contrast to two-stage schemes, our method performs detection and captioning in a single stage. 3) Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that Vote2Cap-DETR surpasses the current state of the art by 11.13\% and 7.11\% in CIDEr@0.5IoU, respectively. Code will be released soon.
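To make the one-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a transformer encoder over scene tokens, a vote-query-driven object decoder, and a caption head applied to the same decoded queries. All module names, dimensions, the vote-query formulation, and the single-linear-layer heads are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Vote2CapDETRSketch(nn.Module):
    """Illustrative sketch of a one-stage detect-and-describe transformer.

    Assumptions (not from the paper): d_model=256, 8 attention heads,
    3 encoder / 6 decoder layers, and simplified box/caption heads.
    A point-cloud backbone producing seed positions and features is
    assumed upstream.
    """

    def __init__(self, d_model=256, vocab_size=4000):
        super().__init__()
        # Transformer encoder over point-cloud feature tokens.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=3,
        )
        # Vote query: predict an offset shifting each seed point toward a
        # likely object center, then embed the shifted position as a query.
        self.vote_offset = nn.Linear(d_model, 3)
        self.query_embed = nn.Linear(3, d_model)
        # Object decoder: vote queries cross-attend to encoded scene tokens.
        self.obj_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=6,
        )
        self.box_head = nn.Linear(d_model, 6)  # center + size (illustrative)
        # Stand-in for a full autoregressive caption decoder.
        self.caption_head = nn.Linear(d_model, vocab_size)

    def forward(self, seed_xyz, seed_feat):
        # seed_xyz: (B, N, 3) seed coordinates; seed_feat: (B, N, d_model).
        memory = self.encoder(seed_feat)
        votes = seed_xyz + self.vote_offset(memory)
        queries = self.query_embed(votes)
        # One stage: boxes and captions are decoded from the same queries,
        # trained as set prediction (Hungarian matching assumed in the loss).
        q = self.obj_decoder(queries, memory)
        return self.box_head(q), self.caption_head(q)
```

The design choice the sketch highlights is that, unlike a ``detect-then-describe'' pipeline, the caption head does not consume finalized detections; both outputs are read off the same decoder queries in one forward pass.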
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Dense Captioning | ScanRefer | BLEU-4@0.5IoU | 39.34 | Vote2Cap-DETR |
| 3D Dense Captioning | ScanRefer | CIDEr@0.5IoU | 71.45 | Vote2Cap-DETR |
| 3D Dense Captioning | ScanRefer | METEOR@0.5IoU | 28.25 | Vote2Cap-DETR |
| 3D Dense Captioning | ScanRefer | ROUGE-L@0.5IoU | 59.33 | Vote2Cap-DETR |
| 3D Dense Captioning | Nr3D | BLEU-4@0.5IoU | 26.68 | Vote2Cap-DETR |
| 3D Dense Captioning | Nr3D | CIDEr@0.5IoU | 43.84 | Vote2Cap-DETR |
| 3D Dense Captioning | Nr3D | METEOR@0.5IoU | 25.41 | Vote2Cap-DETR |
| 3D Dense Captioning | Nr3D | ROUGE-L@0.5IoU | 54.43 | Vote2Cap-DETR |
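The @0.5IoU metrics above follow the standard m@kIoU protocol for 3D dense captioning: a caption contributes its score only if its predicted box overlaps the ground-truth object with IoU at least 0.5, and unmatched objects score zero. The sketch below illustrates that aggregation; `caption_score` and `iou_fn` are assumed helpers (e.g. a CIDEr scorer and a 3D box-IoU routine), not part of any specific library.

```python
def caption_metric_at_iou(preds, gts, caption_score, iou_fn, k=0.5):
    """Illustrative m@kIoU-style aggregation (e.g. CIDEr@0.5IoU).

    preds: list of (box, generated_caption) pairs
    gts:   list of (box, reference_captions) pairs
    """
    total = 0.0
    for gt_box, refs in gts:
        # Match this ground-truth object to the best-overlapping prediction.
        best = max(preds, key=lambda p: iou_fn(p[0], gt_box), default=None)
        if best is not None and iou_fn(best[0], gt_box) >= k:
            total += caption_score(best[1], refs)
        # Objects with no prediction above the IoU threshold contribute 0,
        # so the metric penalizes missed detections as well as bad captions.
    return total / max(len(gts), 1)
```

Under this protocol, detection quality and caption quality are coupled, which is why a stronger one-stage detector can lift CIDEr@0.5IoU even with an unchanged caption decoder.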