Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision

Akhil Gurram, Ahmet Faruk Tuna, Fengyi Shen, Onay Urfalioglu, Antonio M. López

2021-03-22 · Autonomous Driving · Depth Estimation · Monocular Depth Estimation

Paper · PDF · Code (official)

Abstract

Depth information is essential for on-board perception in autonomous driving and driver assistance. Monocular depth estimation (MDE) is very appealing since it provides a direct pixelwise correspondence between appearance and depth without further calibration. The best MDE models are based on Convolutional Neural Networks (CNNs) trained in a supervised manner, i.e., assuming pixelwise ground truth (GT). Usually, this GT is acquired at training time through a calibrated multi-modal suite of sensors. However, using only a monocular system at training time as well is cheaper and more scalable. This is possible by relying on structure-from-motion (SfM) principles to generate self-supervision. Nevertheless, problems such as camouflaged objects, visibility changes, static-camera intervals, textureless areas, and scale ambiguity diminish the usefulness of such self-supervision. In this paper, we perform monocular depth estimation by virtual-world supervision (MonoDEVS) and real-world SfM self-supervision. We compensate for the limitations of SfM self-supervision by leveraging virtual-world images with accurate semantic and depth supervision, and by addressing the virtual-to-real domain gap. Our MonoDEVSNet outperforms previous MDE CNNs trained on monocular and even stereo sequences.
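The scale ambiguity noted in the abstract arises because SfM self-supervision only constrains depth up to an unknown global scale. A common evaluation convention (not specific to this paper) is to align predictions to ground truth by the ratio of medians before computing metrics. A minimal numpy sketch, with a function name of our own choosing:

```python
import numpy as np

def median_scale(pred, gt):
    """Align a scale-ambiguous monocular depth prediction to ground truth
    by multiplying with the ratio of medians (standard eval convention).

    pred, gt: 1-D arrays of predicted and ground-truth depths over valid pixels.
    """
    scale = np.median(gt) / np.median(pred)  # single global scale factor
    return pred * scale
```

After this alignment, metrics such as absolute relative error compare only the structure of the prediction, not its arbitrary scale.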

Results

Task | Dataset | Metric | Value | Model
Depth Estimation | KITTI Eigen split | Delta < 1.25 | 0.969 | MonoDELSNet
Depth Estimation | KITTI Eigen split | Delta < 1.25^2 | 0.996 | MonoDELSNet
Depth Estimation | KITTI Eigen split | Delta < 1.25^3 | 0.999 | MonoDELSNet
Depth Estimation | KITTI Eigen split | RMSE | 2.101 | MonoDELSNet
Depth Estimation | KITTI Eigen split | RMSE log | 0.082 | MonoDELSNet
Depth Estimation | KITTI Eigen split | Sq Rel | 0.161 | MonoDELSNet
Depth Estimation | KITTI Eigen split | absolute relative error | 0.053 | MonoDELSNet
Depth Estimation | KITTI Eigen split unsupervised | Delta < 1.25 | 0.882 | MonoDEVSNet
Depth Estimation | KITTI Eigen split unsupervised | Delta < 1.25^2 | 0.962 | MonoDEVSNet
Depth Estimation | KITTI Eigen split unsupervised | RMSE | 4.413 | MonoDEVSNet
Depth Estimation | KITTI Eigen split unsupervised | Sq Rel | 0.703 | MonoDEVSNet
Depth Estimation | KITTI Eigen split unsupervised | absolute relative error | 0.101 | MonoDEVSNet
3D | KITTI Eigen split | Delta < 1.25 | 0.969 | MonoDELSNet
3D | KITTI Eigen split | Delta < 1.25^2 | 0.996 | MonoDELSNet
3D | KITTI Eigen split | Delta < 1.25^3 | 0.999 | MonoDELSNet
3D | KITTI Eigen split | RMSE | 2.101 | MonoDELSNet
3D | KITTI Eigen split | RMSE log | 0.082 | MonoDELSNet
3D | KITTI Eigen split | Sq Rel | 0.161 | MonoDELSNet
3D | KITTI Eigen split | absolute relative error | 0.053 | MonoDELSNet
3D | KITTI Eigen split unsupervised | Delta < 1.25 | 0.882 | MonoDEVSNet
3D | KITTI Eigen split unsupervised | Delta < 1.25^2 | 0.962 | MonoDEVSNet
3D | KITTI Eigen split unsupervised | RMSE | 4.413 | MonoDEVSNet
3D | KITTI Eigen split unsupervised | Sq Rel | 0.703 | MonoDEVSNet
3D | KITTI Eigen split unsupervised | absolute relative error | 0.101 | MonoDEVSNet
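The metrics in the table above (absolute relative error, squared relative error, RMSE, RMSE log, and the Delta < 1.25^k accuracies) are the standard monocular depth measures used on the KITTI Eigen split. A self-contained numpy sketch of how they are conventionally computed over the valid ground-truth pixels (function name is ours, not from the paper's code):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular depth evaluation metrics (KITTI Eigen convention).

    gt, pred: 1-D arrays of ground-truth and predicted depths (metres)
    over valid pixels only.
    """
    thresh = np.maximum(gt / pred, pred / gt)    # per-pixel ratio, >= 1
    d1 = (thresh < 1.25).mean()                  # Delta < 1.25
    d2 = (thresh < 1.25 ** 2).mean()             # Delta < 1.25^2
    d3 = (thresh < 1.25 ** 3).mean()             # Delta < 1.25^3

    abs_rel = np.mean(np.abs(gt - pred) / gt)    # absolute relative error
    sq_rel = np.mean((gt - pred) ** 2 / gt)      # squared relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))    # root mean squared error
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "rmse_log": rmse_log, "d1": d1, "d2": d2, "d3": d3}
```

Lower is better for the error metrics; higher is better for the Delta accuracies, which is why values such as 0.969 for Delta < 1.25 and 2.101 for RMSE are both strong results.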

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)