Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation

Zhenyu Li, Zehui Chen, Xianming Liu, Junjun Jiang

2022-03-27 · Depth Estimation · Monocular Depth Estimation
Paper · PDF · Code (official)

Abstract

This paper aims to address the problem of supervised monocular depth estimation. We start with a meticulous pilot study to demonstrate that the long-range correlation is essential for accurate depth estimation. Therefore, we propose to leverage the Transformer to model this global context with an effective attention mechanism. We also adopt an additional convolution branch to preserve the local information as the Transformer lacks the spatial inductive bias in modeling such contents. However, independent branches lead to a shortage of connections between features. To bridge this gap, we design a hierarchical aggregation and heterogeneous interaction module to enhance the Transformer features via element-wise interaction and model the affinity between the Transformer and the CNN features in a set-to-set translation manner. Due to the unbearable memory cost caused by global attention on high-resolution feature maps, we introduce the deformable scheme to reduce the complexity. Extensive experiments on the KITTI, NYU, and SUN RGB-D datasets demonstrate that our proposed model, termed DepthFormer, surpasses state-of-the-art monocular depth estimation methods with prominent margins. Notably, it achieves the most competitive result on the highly competitive KITTI depth estimation benchmark. Our codes and models are available at https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox.
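The abstract describes fusing a global Transformer branch with a local convolution branch via element-wise interaction and a set-to-set (cross-attention) affinity. The toy NumPy sketch below illustrates only that general fusion pattern; it is not the authors' HAHI module, and all function names (`cross_attention`, `fuse_branches`) and the specific enhancement formula are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Set-to-set affinity: each query token attends over all key/value
    tokens. queries: (N, d), keys_values: (M, d) -> output: (N, d)."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))  # (N, M) affinity
    return attn @ keys_values

def fuse_branches(trans_feat, conv_feat):
    """Toy two-branch fusion (illustrative only): element-wise enhancement
    of the Transformer features by the CNN features, followed by a
    cross-attention affinity between the two branches."""
    enhanced = trans_feat * (1.0 + np.tanh(conv_feat))  # element-wise interaction
    return cross_attention(enhanced, conv_feat)         # set-to-set translation
```

In the paper, the same idea operates on flattened multi-scale feature maps; here both branches are simply (N, d) token sets of equal size to keep the sketch minimal.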

Results

Task             | Dataset           | Metric                  | Value | Model
Depth Estimation | NYU-Depth V2      | Delta < 1.25            | 0.921 | DepthFormer
Depth Estimation | NYU-Depth V2      | Delta < 1.25^2          | 0.989 | DepthFormer
Depth Estimation | NYU-Depth V2      | Delta < 1.25^3          | 0.998 | DepthFormer
Depth Estimation | NYU-Depth V2      | RMSE                    | 0.339 | DepthFormer
Depth Estimation | NYU-Depth V2      | Absolute relative error | 0.096 | DepthFormer
Depth Estimation | NYU-Depth V2      | log10                   | 0.041 | DepthFormer
Depth Estimation | KITTI Eigen split | Delta < 1.25            | 0.975 | DepthFormer
Depth Estimation | KITTI Eigen split | Delta < 1.25^2          | 0.997 | DepthFormer
Depth Estimation | KITTI Eigen split | Delta < 1.25^3          | 0.999 | DepthFormer
Depth Estimation | KITTI Eigen split | RMSE                    | 2.143 | DepthFormer
Depth Estimation | KITTI Eigen split | RMSE log                | 0.079 | DepthFormer
Depth Estimation | KITTI Eigen split | Sq Rel                  | 0.158 | DepthFormer
Depth Estimation | KITTI Eigen split | Absolute relative error | 0.052 | DepthFormer
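The figures in the results table are the standard monocular depth evaluation metrics (the Eigen-protocol delta thresholds, RMSE, RMSE log, absolute/squared relative error, and log10 error). A minimal NumPy sketch of how they are computed over valid ground-truth pixels (function name hypothetical):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid (gt > 0) pixels."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mask = gt > 0                      # evaluate only where depth is defined
    pred, gt = pred[mask], gt[mask]

    ratio = np.maximum(pred / gt, gt / pred)   # per-pixel max(pred/gt, gt/pred)
    return {
        "abs_rel":  np.mean(np.abs(pred - gt) / gt),
        "sq_rel":   np.mean((pred - gt) ** 2 / gt),
        "rmse":     np.sqrt(np.mean((pred - gt) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "log10":    np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "d1":       np.mean(ratio < 1.25),        # Delta < 1.25
        "d2":       np.mean(ratio < 1.25 ** 2),   # Delta < 1.25^2
        "d3":       np.mean(ratio < 1.25 ** 3),   # Delta < 1.25^3
    }
```

For the delta metrics higher is better (fraction of pixels within the threshold); for all error metrics lower is better.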

Related Papers

- $S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
- $π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
- Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
- Cameras as Relative Positional Encoding (2025-07-14)
- ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)