Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video

Jia-Wang Bian, Zhichao Li, Naiyan Wang, Huangying Zhan, Chunhua Shen, Ming-Ming Cheng, Ian Reid

2019-08-28 · NeurIPS 2019
Tasks: Visual Odometry · Camera Pose Estimation · Depth Estimation · Depth And Camera Motion · Monocular Depth Estimation
Links: Paper · PDF · Code (official) · Code

Abstract

Recent work has shown that CNN-based depth and ego-motion estimators can be learned from unlabelled monocular videos. However, performance is limited by unidentified moving objects that violate the static-scene assumption underlying geometric image reconstruction. More significantly, due to the lack of proper constraints, networks output scale-inconsistent results across different samples, i.e., the ego-motion network cannot provide full camera trajectories over a long video sequence because of the per-frame scale ambiguity. This paper tackles these challenges by proposing a geometry consistency loss for scale-consistent predictions and an induced self-discovered mask for handling moving objects and occlusions. Since we do not leverage multi-task learning like recent works, our framework is much simpler and more efficient. Comprehensive evaluation results demonstrate that our depth estimator achieves state-of-the-art performance on the KITTI dataset. Moreover, we show that our ego-motion network is able to predict a globally scale-consistent camera trajectory for long video sequences, and the resulting visual odometry accuracy is competitive with a recent model trained using stereo videos. To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
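The abstract describes a geometry consistency loss with an induced self-discovered mask: warped depth from one frame is compared against the interpolated depth of the other, and the normalized inconsistency both drives the loss and down-weights moving objects. A minimal NumPy sketch of that idea follows; the function and array names are illustrative, not the authors' official implementation:

```python
import numpy as np

def geometry_consistency(depth_warped, depth_interp, eps=1e-7):
    """Sketch of a geometry consistency loss with a self-discovered mask.

    depth_warped : depth of frame A warped into frame B's view (hypothetical input)
    depth_interp : frame B's predicted depth, sampled at the same pixels
    """
    # Normalized depth inconsistency in [0, 1): large where the static-scene
    # assumption is violated (moving objects, occlusions) or scale drifts.
    diff = np.abs(depth_warped - depth_interp) / (depth_warped + depth_interp + eps)
    mask = 1.0 - diff        # self-discovered mask: low weight on inconsistent pixels
    loss = diff.mean()       # geometry consistency loss, penalizes scale inconsistency
    return loss, mask
```

Because the inconsistency is normalized by the sum of the two depths, the loss is symmetric in its arguments and bounded, and perfectly consistent predictions give zero loss with a mask of ones.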

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Depth Estimation | KITTI Eigen split | Absolute relative error | 0.128 | SC-SfMLearner_CS+K |
| Depth Estimation | KITTI Eigen split | Absolute relative error | 0.137 | SC-SfMLearner |
| 3D | KITTI Eigen split | Absolute relative error | 0.128 | SC-SfMLearner_CS+K |
| 3D | KITTI Eigen split | Absolute relative error | 0.137 | SC-SfMLearner |
| Camera Pose Estimation | KITTI Odometry Benchmark | Absolute Trajectory Error [m] | 37.61 | SC-Depth |
| Camera Pose Estimation | KITTI Odometry Benchmark | Average Rotational Error e_r [%] | 5.11 | SC-Depth |
| Camera Pose Estimation | KITTI Odometry Benchmark | Average Translational Error e_t [%] | 12.2 | SC-Depth |
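The absolute relative error reported above is the standard KITTI depth metric: the mean of |prediction − ground truth| / ground truth over valid pixels. A short NumPy sketch (the sparse-validity handling is an assumption typical of KITTI-style evaluation, since LiDAR ground truth has gaps):

```python
import numpy as np

def abs_rel_error(pred, gt):
    """Absolute relative error: mean(|pred - gt| / gt) over valid pixels."""
    valid = gt > 0  # assume invalid (no LiDAR return) pixels are marked with 0
    return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))
```

For example, predicting 1 m everywhere against a 2 m ground truth gives an absolute relative error of 0.5.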

Related Papers

- DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- $S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
- SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
- SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
- BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images (2025-07-16)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)