Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

Shengheng Deng, Xun Xu, Chaozheng Wu, Ke Chen, Kui Jia

2021-03-30 · CVPR 2021 · Benchmarking · Affordance Detection
Paper · PDF · Code (official)

Abstract

The ability to understand how to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research. It involves categorizing, segmenting, and reasoning about visual affordances. Previous studies have addressed the 2D and 2.5D image domains; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent from the community. In this work, we present the 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories. Based on this dataset, we provide three benchmarking tasks for evaluating visual affordance understanding: full-shape, partial-view, and rotation-invariant affordance estimation. Three state-of-the-art point cloud deep learning networks are evaluated on all tasks. In addition, we investigate a semi-supervised learning setup to explore the possibility of benefiting from unlabeled data. Comprehensive results on our contributed dataset show the promise of visual affordance understanding as a valuable yet challenging benchmark.
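The benchmark scores affordance predictions per point, with IoU averaged across affordance categories (AIOU in the results below). As a minimal sketch of that idea — assuming binary ground-truth masks, thresholded per-point scores, and a single fixed threshold; the paper's exact AIOU definition (e.g. averaging over multiple thresholds) may differ:

```python
import numpy as np

def aiou(pred_scores, gt_masks, threshold=0.5):
    """Mean IoU over affordance categories for one point cloud.

    pred_scores: (N, C) per-point affordance probabilities
    gt_masks:    (N, C) binary ground-truth affordance masks
    """
    pred = pred_scores >= threshold                     # binarize predictions
    ious = []
    for c in range(gt_masks.shape[1]):
        inter = np.logical_and(pred[:, c], gt_masks[:, c]).sum()
        union = np.logical_or(pred[:, c], gt_masks[:, c]).sum()
        if union > 0:                                   # skip affordances absent in both
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```

Averaging per category rather than per point keeps rare affordances (e.g. a mug handle's "grasp" region) from being swamped by large, common ones.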

Results

Task                 | Dataset                        | Metric | Value | Model
---------------------|--------------------------------|--------|-------|------
Affordance Detection | 3D AffordanceNet               | AIOU   | 0.178 | DGCNN
Affordance Detection | 3D AffordanceNet               | mAP    | 0.464 | DGCNN
Affordance Detection | 3D AffordanceNet Rotate z      | AIOU   | 0.161 | DGCNN
Affordance Detection | 3D AffordanceNet Rotate z      | mAP    | 0.448 | DGCNN
Affordance Detection | 3D AffordanceNet Rotate SO(3)  | AIOU   | 0.128 | DGCNN
Affordance Detection | 3D AffordanceNet Rotate SO(3)  | mAP    | 0.373 | DGCNN
Affordance Detection | 3D AffordanceNet Partial View  | AIOU   | 0.138 | DGCNN
Affordance Detection | 3D AffordanceNet Partial View  | mAP    | 0.422 | DGCNN
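The "Rotate z" and "Rotate SO(3)" settings probe robustness to rotations about the gravity axis and to arbitrary 3D rotations, respectively. A minimal sketch of the two test-time perturbations (function names are illustrative, not from the paper's code):

```python
import numpy as np

def rotate_z(points, rng):
    """Rotate an (N, 3) point cloud by a random angle about the z axis."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def rotate_so3(points, rng):
    """Rotate an (N, 3) point cloud by a random SO(3) rotation,
    sampled via QR decomposition of a Gaussian matrix."""
    A = rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.sign(np.diag(R)))   # fix the decomposition's sign ambiguity
    if np.linalg.det(Q) < 0:               # ensure a proper rotation (det = +1)
        Q[:, 0] = -Q[:, 0]
    return points @ Q.T
```

The table's drop from 0.178 AIOU (aligned) to 0.128 (SO(3)) reflects that DGCNN, like most point cloud networks, is not rotation-invariant by construction.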

Related Papers

Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
Training Transformers with Enforced Lipschitz Constants (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
A Multi-View High-Resolution Foot-Ankle Complex Point Cloud Dataset During Gait for Occlusion-Robust 3D Completion (2025-07-15)
FLsim: A Modular and Library-Agnostic Simulation Framework for Federated Learning (2025-07-15)