Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unsupervised Depth Completion with Calibrated Backprojection Layers

Alex Wong, Stefano Soatto

2021-08-24 · ICCV 2021 · Depth Completion
Paper · PDF · Code (official)

Abstract

We propose a deep neural network architecture to infer dense depth from an image and a sparse point cloud. It is trained using a video stream and a corresponding synchronized sparse point cloud, as obtained from a LIDAR or other range sensor, along with the intrinsic calibration parameters of the camera. At inference time, the calibration of the camera, which can be different from the one used for training, is fed as an input to the network along with the sparse point cloud and a single image. A Calibrated Backprojection Layer backprojects each pixel in the image to three-dimensional space using the calibration matrix and a depth feature descriptor. The resulting 3D positional encoding is concatenated with the image descriptor and the previous layer output to yield the input to the next layer of the encoder. A decoder, exploiting skip-connections, produces a dense depth map. The resulting Calibrated Backprojection Network, or KBNet, is trained without supervision by minimizing the photometric reprojection error. KBNet imputes missing depth values based on the training set, rather than on generic regularization. We test KBNet on public depth completion benchmarks, where it outperforms the state of the art by 30.5% indoors and 8.8% outdoors when the same camera is used for training and testing. When the test camera is different, the improvement reaches 62%. Code available at: https://github.com/alexklwong/calibrated-backprojection-network.
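The geometric core of the Calibrated Backprojection Layer is lifting each pixel to a camera-frame ray via the inverse intrinsics. The sketch below shows only that lifting step, not the paper's full layer (which also scales the rays by a depth feature and concatenates image descriptors); the intrinsic values are illustrative, not from KBNet.

```python
import numpy as np

def backproject_pixels(K, height, width):
    """Lift every pixel (u, v) to a camera-frame ray x = K^{-1} [u, v, 1]^T.

    Returns a (3, H, W) grid of ray directions; multiplying a ray by a
    per-pixel depth would give the 3D point, which is the positional
    encoding the abstract describes.
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    ones = np.ones_like(u)
    pixels = np.stack([u, v, ones], axis=0).reshape(3, -1).astype(np.float64)
    rays = np.linalg.inv(K) @ pixels
    return rays.reshape(3, height, width)

# Illustrative pinhole intrinsics (fx, fy, cx, cy) -- an assumption here.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
rays = backproject_pixels(K, height=480, width=640)
print(rays.shape)  # (3, 480, 640)
```

Because the calibration matrix is an input rather than a baked-in constant, the same trained network can backproject correctly for a different test camera, which is what enables the cross-camera results reported below.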

Results

Task             | Dataset                | Metric       | Value   | Model
-----------------|------------------------|--------------|---------|------
Depth Completion | KITTI Depth Completion | MAE          | 256.76  | KBNet
Depth Completion | KITTI Depth Completion | RMSE         | 1069.47 | KBNet
Depth Completion | KITTI Depth Completion | Runtime [ms] | 16      | KBNet
Depth Completion | KITTI Depth Completion | iMAE         | 1.02    | KBNet
Depth Completion | KITTI Depth Completion | iRMSE        | 2.95    | KBNet
Depth Completion | VOID                   | MAE          | 39.8    | KBNet
Depth Completion | VOID                   | RMSE         | 95.86   | KBNet
Depth Completion | VOID                   | iMAE         | 21.16   | KBNet
Depth Completion | VOID                   | iRMSE        | 49.72   | KBNet
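The four error metrics in the table can be computed from a predicted and a ground-truth depth map. The sketch below uses the definitions commonly used for KITTI depth completion: MAE and RMSE on depth in millimetres, iMAE and iRMSE on inverse depth in 1/km; the unit conventions and the validity mask are assumptions here, not taken from the paper.

```python
import numpy as np

def depth_metrics(pred_mm, gt_mm):
    """MAE/RMSE on depth (mm) and iMAE/iRMSE on inverse depth (1/km).

    Only pixels with a positive ground-truth depth are evaluated, since
    depth-completion ground truth is itself sparse.
    """
    valid = gt_mm > 0
    p, g = pred_mm[valid], gt_mm[valid]
    mae = np.mean(np.abs(p - g))
    rmse = np.sqrt(np.mean((p - g) ** 2))
    # Inverse depth in 1/km: depth_mm / 1e6 is depth in km, so invert as 1e6/mm.
    ip, ig = 1e6 / p, 1e6 / g
    imae = np.mean(np.abs(ip - ig))
    irmse = np.sqrt(np.mean((ip - ig) ** 2))
    return mae, rmse, imae, irmse
```

The inverse-depth metrics weight near-range errors more heavily than far-range ones, which is why indoor benchmarks such as VOID are often discussed in those terms.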

Related Papers

PacGDC: Label-Efficient Generalizable Depth Completion with Projection Ambiguity and Consistency (2025-07-10)
DidSee: Diffusion-Based Depth Completion for Material-Agnostic Robotic Perception and Manipulation (2025-06-26)
DCIRNet: Depth Completion with Iterative Refinement for Dexterous Grasping of Transparent and Reflective Objects (2025-06-11)
SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping (2025-05-30)
HTMNet: A Hybrid Network with Transformer-Mamba Bottleneck Multimodal Fusion for Transparent and Reflective Objects Depth Completion (2025-05-27)
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World (2025-05-22)
Event-Driven Dynamic Scene Depth Completion (2025-05-19)
Depth Anything with Any Prior (2025-05-15)