
HybridDepth: Robust Metric Depth Fusion by Leveraging Depth from Focus and Single-Image Priors

Ashkan Ganj, Hang Su, Tian Guo

2024-07-26 · Zero-shot Generalization · Depth Estimation · Monocular Depth Estimation

Paper · PDF · Code (official)

Abstract

We propose HYBRIDDEPTH, a robust depth estimation pipeline that addresses key challenges in depth estimation, including scale ambiguity, hardware heterogeneity, and generalizability. HYBRIDDEPTH leverages the focal stack, data conveniently accessible on common mobile devices, to produce accurate metric depth maps. By incorporating depth priors afforded by recent advances in single-image depth estimation, our model achieves a higher level of structural detail than existing methods. We test our pipeline as an end-to-end system, with a newly developed mobile client that captures focal stacks and sends them to a GPU-powered server for depth estimation. Comprehensive quantitative and qualitative analyses demonstrate that HYBRIDDEPTH outperforms state-of-the-art (SOTA) models on common datasets such as DDFF12 and NYU Depth V2. HYBRIDDEPTH also shows strong zero-shot generalization: when trained on NYU Depth V2, it surpasses SOTA models in zero-shot performance on ARKitScenes and delivers more structurally accurate depth maps on Mobile Depth. The code is available at https://github.com/cake-lab/HybridDepth/.
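The core idea described in the abstract, grounding a relative single-image depth prior with metric depth-from-focus estimates, can be illustrated with a global scale-and-shift alignment. The sketch below is a minimal illustration of that general technique, not the authors' fusion module (their actual method is in the linked repository); the function name `align_scale_shift` and the least-squares strategy are assumptions made for this example.

```python
import numpy as np

def align_scale_shift(relative_depth, metric_depth, mask=None):
    """Fit scale s and shift t so that s * relative_depth + t best matches
    the metric depth estimate in the least-squares sense.

    Illustrative only: HybridDepth's learned fusion is more sophisticated
    than a single global scale/shift fit.
    """
    if mask is None:
        mask = np.isfinite(metric_depth)
    x = relative_depth[mask]
    y = metric_depth[mask]
    # Design matrix [d_rel, 1] so the solution is (scale, shift).
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * relative_depth + t  # relative prior lifted to metric scale

# Toy usage: a relative prior that is off by scale 2.0 and shift 0.5.
rel = np.random.rand(240, 320).astype(np.float64)
gt_metric = 2.0 * rel + 0.5
fused = align_scale_shift(rel, gt_metric)
assert np.allclose(fused, gt_metric, atol=1e-6)
```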

Results

Task             | Dataset      | Metric                  | Value | Model
Depth Estimation | NYU-Depth V2 | Delta < 1.25            | 0.988 | HybridDepth
Depth Estimation | NYU-Depth V2 | Delta < 1.25^2          | 1     | HybridDepth
Depth Estimation | NYU-Depth V2 | Delta < 1.25^3          | 1     | HybridDepth
Depth Estimation | NYU-Depth V2 | RMSE                    | 0.128 | HybridDepth
Depth Estimation | NYU-Depth V2 | Absolute relative error | 0.026 | HybridDepth
3D               | NYU-Depth V2 | Delta < 1.25            | 0.988 | HybridDepth
3D               | NYU-Depth V2 | Delta < 1.25^2          | 1     | HybridDepth
3D               | NYU-Depth V2 | Delta < 1.25^3          | 1     | HybridDepth
3D               | NYU-Depth V2 | RMSE                    | 0.128 | HybridDepth
3D               | NYU-Depth V2 | Absolute relative error | 0.026 | HybridDepth
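The metrics in the table follow the standard monocular depth evaluation protocol: accuracy under thresholds Delta < 1.25^k, root-mean-square error, and absolute relative error. Below is a minimal sketch of how these metrics are conventionally computed; variable names are mine and this is not the authors' evaluation code.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """Standard depth-estimation metrics over valid ground-truth pixels."""
    valid = gt > 0                      # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]
    # Per-pixel max ratio between prediction and ground truth.
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    return {
        "Delta < 1.25":   float(np.mean(ratio < 1.25)),
        "Delta < 1.25^2": float(np.mean(ratio < 1.25 ** 2)),
        "Delta < 1.25^3": float(np.mean(ratio < 1.25 ** 3)),
        "RMSE":           float(np.sqrt(np.mean((pred - gt) ** 2))),
        "AbsRel":         float(np.mean(np.abs(pred - gt) / (gt + eps))),
    }
```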

Related Papers

$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
Cameras as Relative Positional Encoding (2025-07-14)