
UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation

Taeyeop Lee, Byeong-Uk Lee, Inkyu Shin, Jaesung Choe, Ukcheol Shin, In So Kweon, Kuk-Jin Yoon

2021-11-24 · CVPR 2022
Tasks: Self-Supervised Learning · Pose Estimation · Unsupervised Domain Adaptation · 6D Pose Estimation using RGBD · Domain Adaptation

Abstract

Learning to estimate object pose often requires ground-truth (GT) labels, such as CAD models and absolute-scale object poses, which are expensive and laborious to obtain in the real world. To tackle this problem, we propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called UDA-COPE. Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target-domain pose labels. We also introduce a bidirectional filtering method between the predicted normalized object coordinate space (NOCS) map and the observed point cloud, both to make our teacher network more robust to the target domain and to provide more reliable pseudo labels for training the student network. Extensive experimental results demonstrate the effectiveness of the proposed method both quantitatively and qualitatively. Notably, without leveraging target-domain GT labels, our method achieves performance comparable, and sometimes superior, to existing methods that depend on GT labels.
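
The two ingredients named in the abstract are (i) a teacher-student scheme in which the teacher's predictions on unlabeled target data become pseudo labels for the student, and (ii) bidirectional filtering that cross-checks the predicted NOCS map against the observed depth point cloud and discards inconsistent points. The paper page does not include code; the sketch below is one plausible instantiation of the filtering idea, not the paper's exact procedure. It assumes hypothetical inputs `nocs_pred` (per-pixel predicted NOCS coordinates, shape (N, 3)) and `pcd_obs` (the pixel-aligned back-projected depth points in meters), and an illustrative threshold `tau`.

```python
import numpy as np
from scipy.spatial import cKDTree

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (s, R, t) with s * R @ src + t ~= dst.
    Standard Umeyama alignment for (N, 3) point arrays."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                               # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def bidirectional_filter(nocs_pred, pcd_obs, tau=0.01):
    """Cross-check a predicted NOCS map against the observed point cloud and
    keep only mutually consistent pixels. `tau` (here 1 cm) is illustrative,
    not the paper's hyperparameter."""
    s, R, t = umeyama_similarity(nocs_pred, pcd_obs)
    nocs_cam = (s * nocs_pred) @ R.T + t             # NOCS points -> camera frame
    d_n2p, _ = cKDTree(pcd_obs).query(nocs_cam)      # filters unreliable NOCS pixels
    d_p2n, _ = cKDTree(nocs_cam).query(pcd_obs)      # filters noisy depth points
    return (d_n2p < tau) & (d_p2n < tau)             # consistent in both directions
```

Pixels that survive the filter are supported by both the network's NOCS prediction and the observed geometry, so their NOCS values can serve as the "more reliable pseudo labels" the abstract refers to; the paper's actual filtering criteria may differ in detail.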

Results

Task                    Dataset   Metric          Value   Model
Pose Estimation         REAL275   mAP 10°, 2cm    56.9    UDA-COPE
Pose Estimation         REAL275   mAP 10°, 5cm    66      UDA-COPE
Pose Estimation         REAL275   mAP 3D IoU@50   82.6    UDA-COPE
Pose Estimation         REAL275   mAP 3D IoU@75   62.5    UDA-COPE
Pose Estimation         REAL275   mAP 5°, 2cm     30.4    UDA-COPE
Pose Estimation         REAL275   mAP 5°, 5cm     34.8    UDA-COPE
3D                      REAL275   mAP 10°, 2cm    56.9    UDA-COPE
3D                      REAL275   mAP 10°, 5cm    66      UDA-COPE
3D                      REAL275   mAP 3D IoU@50   82.6    UDA-COPE
3D                      REAL275   mAP 3D IoU@75   62.5    UDA-COPE
3D                      REAL275   mAP 5°, 2cm     30.4    UDA-COPE
3D                      REAL275   mAP 5°, 5cm     34.8    UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 10°, 2cm    56.9    UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 10°, 5cm    66      UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 3D IoU@50   82.6    UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 3D IoU@75   62.5    UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 5°, 2cm     30.4    UDA-COPE
1 Image, 2*2 Stitching  REAL275   mAP 5°, 5cm     34.8    UDA-COPE
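
For context on the metric names: under the NOCS/REAL275 convention, a prediction counts toward "mAP n°, m cm" when its rotation error is below n degrees and its translation error is below m centimeters, while "mAP 3D IoU@k" requires at least k% overlap between the predicted and ground-truth 3D bounding boxes. Below is a minimal sketch of the per-instance rotation/translation check; the helper name is hypothetical, poses are assumed to be in meters, and the per-category symmetry handling that NOCS-style evaluation applies to objects like bowls and cans is omitted.

```python
import numpy as np

def pose_within_threshold(R_pred, t_pred, R_gt, t_gt,
                          deg_thresh=10.0, cm_thresh=2.0):
    """Check the 'n degrees, m cm' criterion: geodesic rotation error below
    deg_thresh AND translation error below cm_thresh."""
    # Rotation error: angle of the relative rotation R_pred @ R_gt^T.
    cos_angle = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Translation error in centimeters (poses assumed to be in meters).
    trans_err_cm = 100.0 * np.linalg.norm(t_pred - t_gt)
    return rot_err_deg < deg_thresh and trans_err_cm < cm_thresh
```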

Related Papers

A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)