Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Markerless Camera-to-Robot Pose Estimation via Self-supervised Sim-to-Real Transfer

Jingpei Lu, Florian Richter, Michael C. Yip

2023-02-28 · CVPR 2023
Tasks: Foreground Segmentation · Pose Estimation · Robot Pose Estimation · Pose Prediction · Deep Learning
Paper · PDF

Abstract

Solving the camera-to-robot pose is a fundamental requirement for vision-based robot control, and is a process that takes considerable effort and care to make accurate. Traditional approaches require modification of the robot via markers, and subsequent deep learning approaches enabled markerless feature extraction. Mainstream deep learning methods use only synthetic data and rely on Domain Randomization to bridge the sim-to-real gap, because acquiring 3D annotations is labor-intensive. In this work, we go beyond the limitation of 3D annotations for real-world data. We propose an end-to-end pose estimation framework that is capable of online camera-to-robot calibration, together with a self-supervised training method that scales training to unlabeled real-world data. Our framework combines deep learning and geometric vision for solving the robot pose, and the pipeline is fully differentiable. To train the Camera-to-Robot Pose Estimation Network (CtRNet), we leverage foreground segmentation and differentiable rendering for image-level self-supervision. The pose prediction is visualized through a renderer, and the image loss with respect to the input image is back-propagated to train the neural network. Our experimental results on two public real datasets confirm the effectiveness of our approach over existing works. We also integrate our framework into a visual servoing system to demonstrate the promise of real-time, precise robot pose estimation for automation tasks.
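The core training signal described in the abstract — render the predicted pose, compare it to the segmented foreground, and back-propagate the image loss — can be sketched with a toy 2-D example. Everything below (the disc "renderer", the finite-difference gradient, the function names) is a hypothetical stand-in for illustration only; the paper's CtRNet uses a neural pose network and a true differentiable renderer of the robot mesh.

```python
import numpy as np

def render_silhouette(pose, grid=32):
    # Toy "renderer": draw a disc whose centre is set by a 2-D pose.
    # Stands in for rendering the robot model at the predicted pose.
    ys, xs = np.mgrid[0:grid, 0:grid]
    cx, cy = pose
    return (np.hypot(xs - cx, ys - cy) < grid / 6).astype(float)

def image_loss(pose, target_mask):
    # Image-level self-supervision: mean squared error between the
    # rendered silhouette and the foreground-segmentation mask.
    return np.mean((render_silhouette(pose) - target_mask) ** 2)

def refine_pose(pose, target_mask, lr=60.0, steps=200, eps=0.5):
    # Gradient descent on the image loss, here via central finite
    # differences (the real pipeline back-propagates analytically
    # through a differentiable renderer instead).
    pose = np.array(pose, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for i in range(len(pose)):
            d = np.zeros_like(pose)
            d[i] = eps
            grad[i] = (image_loss(pose + d, target_mask)
                       - image_loss(pose - d, target_mask)) / (2 * eps)
        pose -= lr * grad
    return pose
```

The point of the sketch is the structure of the loop: no 3D ground-truth pose ever appears — only a segmentation mask of the real image supervises the pose, which is what lets training scale to unlabeled real-world data.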

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Pose Estimation | DREAM-dataset | AUC (avg. on 4 real DREAM datasets) | 86.4 | CtRNet (known-joint) |
| Pose Estimation | DREAM-dataset | mean-ADD (avg. on 4 real DREAM datasets) | 19 | CtRNet (known-joint) |
| 3D | DREAM-dataset | AUC (avg. on 4 real DREAM datasets) | 86.4 | CtRNet (known-joint) |
| 3D | DREAM-dataset | mean-ADD (avg. on 4 real DREAM datasets) | 19 | CtRNet (known-joint) |
| 6D Pose Estimation | DREAM-dataset | AUC (avg. on 4 real DREAM datasets) | 86.4 | CtRNet (known-joint) |
| 6D Pose Estimation | DREAM-dataset | mean-ADD (avg. on 4 real DREAM datasets) | 19 | CtRNet (known-joint) |
| 1 Image, 2*2 Stitchi | DREAM-dataset | AUC (avg. on 4 real DREAM datasets) | 86.4 | CtRNet (known-joint) |
| 1 Image, 2*2 Stitchi | DREAM-dataset | mean-ADD (avg. on 4 real DREAM datasets) | 19 | CtRNet (known-joint) |

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)