Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DualPoseNet: Category-level 6D Object Pose and Size Estimation Using Dual Pose Network with Refined Learning of Pose Consistency

Jiehong Lin, Zewei Wei, Zhihao Li, Songcen Xu, Kui Jia, Yuanqing Li

2021-03-11 · ICCV 2021 · Tasks: Pose Prediction, 6D Pose Estimation using RGBD
Paper · PDF · Code (official)

Abstract

Category-level 6D object pose and size estimation aims to predict the full pose configuration of rotation, translation, and size for object instances observed in single, arbitrary views of cluttered scenes. In this paper, we propose a new method of Dual Pose Network with refined learning of pose consistency for this task, shortened as DualPoseNet. DualPoseNet stacks two parallel pose decoders on top of a shared pose encoder, where the implicit decoder predicts object poses with a working mechanism different from that of the explicit one; the two decoders thus impose complementary supervision on the training of the pose encoder. We construct the encoder based on spherical convolutions and design a Spherical Fusion module therein for a better embedding of pose-sensitive features from the appearance and shape observations. Given no CAD models at test time, it is the novel introduction of the implicit decoder that enables refined pose prediction during testing, by enforcing consistency between the poses predicted by the two decoders using a self-adaptive loss term. Thorough experiments on benchmarks of both category- and instance-level object pose datasets confirm the efficacy of our designs. DualPoseNet outperforms existing methods by a large margin in the regime of high precision. Our code is released publicly at https://github.com/Gorilla-Lab-SCUT/DualPoseNet.
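The core idea of the refined learning stage, a consistency term penalizing disagreement between the explicit and implicit decoders' pose predictions, can be sketched as a toy loss. The pose tuple `(R, t, s)` (rotation matrix, translation, scalar size) and the equal weighting are illustrative assumptions; the paper's actual self-adaptive loss operates on the network's decoder outputs during test-time refinement.

```python
import numpy as np

def pose_consistency_loss(pose_explicit, pose_implicit):
    """Toy consistency term between the two decoders' pose predictions.

    Each pose is a (R, t, s) tuple: a 3x3 rotation matrix, a translation
    vector, and a scalar size. The loss is zero iff the two decoders agree,
    which is the signal DualPoseNet minimizes to refine poses at test time
    when no CAD model is available. Illustration only, with hypothetical
    equal weighting of the three disagreement terms.
    """
    R_e, t_e, s_e = pose_explicit
    R_i, t_i, s_i = pose_implicit
    return (np.linalg.norm(R_e - R_i)      # rotation disagreement (Frobenius)
            + np.linalg.norm(t_e - t_i)    # translation disagreement
            + abs(s_e - s_i))              # size disagreement
```

In the paper's setting this term is minimized by further optimizing the encoder on the test observation, so that both decoders converge to a single, more accurate pose.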

Results

| Task                   | Dataset | Metric        | Value | Model       |
|------------------------|---------|---------------|-------|-------------|
| Pose Estimation        | REAL275 | mAP 10, 2cm   | 50    | DualPoseNet |
| Pose Estimation        | REAL275 | mAP 10, 5cm   | 66.8  | DualPoseNet |
| Pose Estimation        | REAL275 | mAP 3D IoU@50 | 79.8  | DualPoseNet |
| Pose Estimation        | REAL275 | mAP 5, 5cm    | 35.9  | DualPoseNet |
| 3D                     | REAL275 | mAP 10, 2cm   | 50    | DualPoseNet |
| 3D                     | REAL275 | mAP 10, 5cm   | 66.8  | DualPoseNet |
| 3D                     | REAL275 | mAP 3D IoU@50 | 79.8  | DualPoseNet |
| 3D                     | REAL275 | mAP 5, 5cm    | 35.9  | DualPoseNet |
| 1 Image, 2*2 Stitching | REAL275 | mAP 10, 2cm   | 50    | DualPoseNet |
| 1 Image, 2*2 Stitching | REAL275 | mAP 10, 5cm   | 66.8  | DualPoseNet |
| 1 Image, 2*2 Stitching | REAL275 | mAP 3D IoU@50 | 79.8  | DualPoseNet |
| 1 Image, 2*2 Stitching | REAL275 | mAP 5, 5cm    | 35.9  | DualPoseNet |
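The "mAP n, m cm" metrics above follow the usual REAL275/NOCS convention: a predicted pose counts as correct when its rotation error is below n degrees and its translation error below m centimeters. A minimal sketch of that per-pose check, assuming translations in meters; this is an illustration of the convention, not the benchmark's evaluation code.

```python
import numpy as np

def pose_correct(R_pred, t_pred, R_gt, t_gt, rot_thresh_deg, trans_thresh_cm):
    """Check one pose against an 'n degrees, m cm' threshold pair.

    Rotation error is the geodesic angle between predicted and ground-truth
    rotation matrices; translation error is the Euclidean distance, converted
    from meters to centimeters (an assumption about the input units).
    """
    # Geodesic rotation error: arccos((trace(R_pred^T R_gt) - 1) / 2).
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    trans_err_cm = np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt)) * 100.0
    return rot_err_deg < rot_thresh_deg and trans_err_cm < trans_thresh_cm
```

Averaging this indicator over detections and categories (with precision-recall integration) yields the mAP values reported in the table.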

Related Papers

- EasyInsert: A Data-Efficient and Generalizable Insertion Policy (2025-05-22)
- Template-Guided 3D Molecular Pose Generation via Flow Matching and Differentiable Optimization (2025-05-22)
- UPTor: Unified 3D Human Pose Dynamics and Trajectory Prediction for Human-Robot Interaction (2025-05-20)
- Multi-Resolution Haar Network: Enhancing human motion prediction via Haar transform (2025-05-19)
- ZeroGrasp: Zero-Shot Shape Reconstruction Enabled Robotic Grasping (2025-04-15)
- FRAME: Floor-aligned Representation for Avatar Motion from Egocentric Video (2025-03-29)
- Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within Unified Neural Representations (2025-03-13)
- MarsLGPR: Mars Rover Localization with Ground Penetrating Radar (2025-03-06)