Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Sparse and noisy LiDAR completion with RGB guidance and uncertainty

Wouter Van Gansbeke, Davy Neven, Bert de Brabandere, Luc van Gool

Published: 2019-02-14
Tasks: Autonomous Vehicles · Depth Completion · Depth Prediction · Depth Estimation
Links: Paper · PDF · Code (official)

Abstract

This work proposes a new method to accurately complete sparse LiDAR maps guided by RGB images. For autonomous vehicles and robotics the use of LiDAR is indispensable in order to achieve precise depth predictions. A multitude of applications depend on the awareness of their surroundings, and use depth cues to reason and react accordingly. On the one hand, monocular depth prediction methods fail to generate absolute and precise depth maps. On the other hand, stereoscopic approaches are still significantly outperformed by LiDAR based approaches. The goal of the depth completion task is to generate dense depth predictions from sparse and irregular point clouds which are mapped to a 2D plane. We propose a new framework which extracts both global and local information in order to produce proper depth maps. We argue that simple depth completion does not require a deep network. However, we additionally propose a fusion method with RGB guidance from a monocular camera in order to leverage object information and to correct mistakes in the sparse input. This improves the accuracy significantly. Moreover, confidence masks are exploited in order to take into account the uncertainty in the depth predictions from each modality. This fusion method outperforms the state-of-the-art and ranks first on the KITTI depth completion benchmark. Our code with visualizations is available.
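The abstract describes fusing a global and a local depth prediction, with per-modality confidence masks accounting for uncertainty. A minimal sketch of that idea is below: each branch outputs a depth map plus a confidence map, and the final depth is a per-pixel softmax-weighted average. The function name and exact weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fuse_depth(global_depth, local_depth, global_conf, local_conf):
    """Per-pixel confidence-weighted fusion of two depth maps.

    Sketch of the uncertainty-aware fusion described in the abstract:
    confidences are normalized with a per-pixel softmax so the two
    branches' weights sum to 1 at every pixel. (Illustrative only;
    the paper's actual weighting may differ.)
    """
    conf = np.stack([global_conf, local_conf], axis=0)
    conf = conf - conf.max(axis=0, keepdims=True)  # numerical stability
    weights = np.exp(conf)
    weights /= weights.sum(axis=0, keepdims=True)

    depths = np.stack([global_depth, local_depth], axis=0)
    return (weights * depths).sum(axis=0)
```

With equal confidences this reduces to a plain average; as one branch's confidence grows, the fused depth moves toward that branch's prediction.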

Results

Task              Dataset                  Metric        Value    Model
Depth Completion  KITTI Depth Completion   MAE           215.02   FusionNet (RGB_guide&certainty)
Depth Completion  KITTI Depth Completion   RMSE          772.87   FusionNet (RGB_guide&certainty)
Depth Completion  KITTI Depth Completion   Runtime [ms]  20       FusionNet (RGB_guide&certainty)
Depth Completion  KITTI Depth Completion   iMAE          0.93     FusionNet (RGB_guide&certainty)
Depth Completion  KITTI Depth Completion   iRMSE         2.19     FusionNet (RGB_guide&certainty)
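For reference, the four error metrics in the table follow the standard KITTI depth-completion definitions: depths are in millimetres (MAE, RMSE) and inverse-depth errors in 1/km (iMAE, iRMSE), evaluated only on pixels with ground truth. A sketch of those definitions (not code from the paper or the official devkit):

```python
import numpy as np

def kitti_depth_metrics(pred_mm, gt_mm):
    """KITTI depth-completion metrics from predicted and ground-truth depth.

    pred_mm, gt_mm: depth maps in millimetres; gt_mm == 0 marks pixels
    without ground truth, which are excluded from the evaluation.
    Returns MAE/RMSE in mm and iMAE/iRMSE in 1/km.
    """
    valid = gt_mm > 0
    err = pred_mm[valid] - gt_mm[valid]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    # Inverse-depth error: 1/mm converted to 1/km (1 km = 1e6 mm)
    inv_err = 1e6 * (1.0 / pred_mm[valid] - 1.0 / gt_mm[valid])
    imae = np.abs(inv_err).mean()
    irmse = np.sqrt((inv_err ** 2).mean())
    return {"MAE": mae, "RMSE": rmse, "iMAE": imae, "iRMSE": irmse}
```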

Related Papers

$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
Cameras as Relative Positional Encoding (2025-07-14)
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)