Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Visual Relation Detection using Depth Maps

Sahand Sharifzadeh, Sina Moayed Baharlou, Max Berrendorf, Rajat Koner, Volker Tresp

2019-05-02 · Tasks: Visual Relationship Detection, Relationship Detection
Links: Paper · PDF · Code (official)

Abstract

Visual relation detection methods rely on object information extracted from RGB images, such as 2D bounding boxes, feature maps, and predicted class probabilities. We argue that depth maps can additionally provide valuable information on object relations, e.g. helping to detect not only spatial relations, such as standing behind, but also non-spatial relations, such as holding. In this work, we study the effect of using different object features, with a focus on depth maps. To enable this study, we release a new synthetic dataset of depth maps, VG-Depth, as an extension to Visual Genome (VG). We also note that, given the highly imbalanced distribution of relations in VG, typical evaluation metrics for visual relation detection cannot reveal improvements on under-represented relations. To address this problem, we propose an additional metric, Macro Recall@K, and demonstrate its merit on VG. Finally, our experiments confirm that by effectively utilizing depth maps within a simple yet competitive framework, the performance of visual relation detection can be improved by a margin of up to 8%.
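The key idea behind Macro Recall@K is to compute Recall@K separately for each predicate class and then average uniformly over predicates, so that rare relations weigh as much as frequent ones. A minimal sketch of such a computation is below; the data layout (dictionaries of scored subject-predicate-object triples keyed by image) is an assumption for illustration, not the paper's actual evaluation code.

```python
from collections import defaultdict

def macro_recall_at_k(predictions, ground_truth, k):
    """Macro Recall@K (illustrative sketch, not the official implementation).

    predictions:  {image_id: [(subj, predicate, obj, score), ...]}
    ground_truth: {image_id: [(subj, predicate, obj), ...]}

    Recall is accumulated per predicate class over the top-K scored
    predictions of each image, then averaged uniformly over predicates,
    so under-represented relations count as much as frequent ones.
    """
    hits = defaultdict(int)    # ground-truth triples recovered, per predicate
    totals = defaultdict(int)  # ground-truth triples seen, per predicate
    for image_id, gt_triples in ground_truth.items():
        top_k = sorted(predictions.get(image_id, []),
                       key=lambda p: p[3], reverse=True)[:k]
        top_k_set = {(s, p, o) for s, p, o, _ in top_k}
        for triple in gt_triples:
            totals[triple[1]] += 1
            if triple in top_k_set:
                hits[triple[1]] += 1
    # uniform (macro) average of per-predicate recalls
    return sum(hits[p] / totals[p] for p in totals) / len(totals)
```

A frequent predicate like "on" can no longer dominate the score: if a model recovers every "on" triple but misses every "holding" triple, the macro average penalizes it, whereas plain Recall@K would barely notice.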

Results

Task                          | Dataset | Metric      | Value | Model
Scene Parsing                 | VRD     | R@50 (k=1)  | 15    | Ours - v
Visual Relationship Detection | VRD     | R@50 (k=1)  | 15    | Ours - v
Scene Understanding           | VRD     | R@50 (k=1)  | 15    | Ours - v
2D Semantic Segmentation      | VRD     | R@50 (k=1)  | 15    | Ours - v

Related Papers

METOR: A Unified Framework for Mutual Enhancement of Objects and Relationships in Open-vocabulary Video Visual Relationship Detection (2025-05-10)
End-to-end Open-vocabulary Video Visual Relationship Detection using Multi-modal Prompting (2024-09-19)
A Review of Human-Object Interaction Detection (2024-08-20)
Hire: Hybrid-modal Interaction with Multiple Relational Enhancements for Image-Text Matching (2024-06-05)
AUG: A New Dataset and An Efficient Model for Aerial Image Urban Scene Graph Generation (2024-04-11)
Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection (2024-03-26)
Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection (2024-03-21)
Video Relationship Detection Using Mixture of Experts (2024-03-06)