Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer

René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, Vladlen Koltun

2019-07-02 · Depth Estimation · Monocular Depth Estimation
Paper · PDF · Code (official)
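
The official repository also exposes pretrained weights through torch.hub. Below is a minimal inference sketch following the repository's README; the model name ("MiDaS", the v2.1 large model from this paper), the "transforms" entry point, and the input path are taken as assumptions — check the repo for the current list of model types.

```python
import cv2
import torch

# Load the pretrained MiDaS model and its matching input transform via torch.hub.
# "MiDaS" selects the v2.1 large model described in this paper (see the repo README).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

# Read an image (path is illustrative) and convert BGR -> RGB for the transform.
img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
batch = transforms.default_transform(img).to(device)

with torch.no_grad():
    prediction = midas(batch)  # inverse-depth (disparity) map, batch of 1
    # Resize the prediction back to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

depth = prediction.cpu().numpy()  # relative inverse depth, up to scale and shift
```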

Abstract

The success of monocular depth estimation relies on large and diverse training sets. Due to the challenges associated with acquiring dense ground-truth depth across different environments at scale, a number of datasets with distinct characteristics and biases have emerged. We develop tools that enable mixing multiple datasets during training, even if their annotations are incompatible. In particular, we propose a robust training objective that is invariant to changes in depth range and scale, advocate the use of principled multi-objective learning to combine data from different sources, and highlight the importance of pretraining encoders on auxiliary tasks. Armed with these tools, we experiment with five diverse training datasets, including a new, massive data source: 3D films. To demonstrate the generalization power of our approach, we use zero-shot cross-dataset transfer, i.e. we evaluate on datasets that were not seen during training. The experiments confirm that mixing data from complementary sources greatly improves monocular depth estimation. Our approach clearly outperforms competing methods across diverse datasets, setting a new state of the art for monocular depth estimation. Some results are shown in the supplementary video at https://youtu.be/D46FzVyL9I8.
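
The "robust training objective that is invariant to changes in depth range and scale" can be illustrated with a closed-form scale-and-shift alignment: fit s and t per image by least squares, then penalize the aligned residual. The sketch below captures only that core idea; the paper additionally works in disparity space and uses trimmed robust losses plus gradient-matching terms, and the function name and tensor shapes here are illustrative.

```python
import torch

def scale_shift_invariant_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               mask: torch.Tensor) -> torch.Tensor:
    """Align pred to target under s*pred + t (closed-form least squares,
    per image), then return the masked MSE of the aligned residual.
    Inputs are flattened to (batch, num_pixels); mask is 1 on valid pixels."""
    pred, target, mask = (x.flatten(1).float() for x in (pred, target, mask))
    # Normal equations of min_{s,t} sum_i mask_i * (s*pred_i + t - target_i)^2:
    #   [a b] [s]   [e]        a = sum m*p^2, b = sum m*p, c = sum m,
    #   [b c] [t] = [f]        e = sum m*p*q, f = sum m*q
    a = (mask * pred * pred).sum(1)
    b = (mask * pred).sum(1)
    c = mask.sum(1)
    e = (mask * pred * target).sum(1)
    f = (mask * target).sum(1)
    det = (a * c - b * b).clamp(min=1e-8)  # zero iff pred is constant on the mask
    s = (c * e - b * f) / det
    t = (a * f - b * e) / det
    aligned = s.unsqueeze(1) * pred + t.unsqueeze(1)
    residual = mask * (aligned - target) ** 2
    return (residual.sum(1) / c.clamp(min=1)).mean()
```

Because any affine transform s*pred + t of a prediction achieves the same loss as the prediction itself, a network trained this way estimates depth only up to scale and shift, which is exactly what allows datasets with incompatible depth annotations to be mixed.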

Results

Task | Dataset | Metric | Value | Model
Depth Estimation | DCM | Abs Rel | 0.309 | MiDaS
Depth Estimation | DCM | RMSE | 1.033 | MiDaS
Depth Estimation | DCM | RMSE log | 0.375 | MiDaS
Depth Estimation | DCM | Sq Rel | 0.381 | MiDaS
Depth Estimation | eBDtheque | Abs Rel | 0.419 | MiDaS
Depth Estimation | eBDtheque | RMSE | 1.416 | MiDaS
Depth Estimation | eBDtheque | RMSE log | 0.659 | MiDaS
Depth Estimation | eBDtheque | Sq Rel | 0.503 | MiDaS
Depth Estimation | ETH3D | Delta < 1.25 | 0.0752 | MiDaS
Depth Estimation | ETH3D | Abs Rel | 0.0184 | MiDaS
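
For reference, the metrics in the table follow the standard monocular-depth error definitions (Abs Rel, Sq Rel, RMSE, RMSE log, and the delta < 1.25 inlier ratio). A sketch of those definitions, assuming the usual conventions over valid positive-depth pixels; exact evaluation protocols and masking vary by leaderboard.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Standard depth-estimation error metrics over valid (positive) pixels.
    pred and gt are depth arrays of identical shape."""
    ratio = np.maximum(gt / pred, pred / gt)
    return {
        "Abs Rel": float(np.mean(np.abs(pred - gt) / gt)),
        "Sq Rel": float(np.mean((pred - gt) ** 2 / gt)),
        "RMSE": float(np.sqrt(np.mean((pred - gt) ** 2))),
        "RMSE log": float(np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))),
        "Delta < 1.25": float(np.mean(ratio < 1.25)),
    }
```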

Related Papers

$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
Cameras as Relative Positional Encoding (2025-07-14)
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)