Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion

Xinjing Cheng, Peng Wang, Chenye Guan, Ruigang Yang

2019-11-13 · Depth Completion · Stereo-LiDAR Fusion

Paper · PDF

Abstract

Depth completion deals with the problem of converting a sparse depth map to a dense one, given the corresponding color image. The convolutional spatial propagation network (CSPN) is one of the state-of-the-art (SoTA) methods for depth completion, recovering structural details of the scene. In this paper, we propose CSPN++, which further improves its effectiveness and efficiency by learning adaptive convolutional kernel sizes and the number of propagation iterations, so that the context and computational resources needed at each pixel can be dynamically assigned on demand. Specifically, we formulate the learning of these two hyper-parameters as an architecture selection problem: various configurations of kernel sizes and numbers of iterations are first defined, and a set of soft weighting parameters is then trained to either assemble or select from the predefined configurations at each pixel. In our experiments, we find that weighted assembling, which we refer to as "context-aware CSPN", leads to significant accuracy improvements, while weighted selection, "resource-aware CSPN", significantly reduces computational cost with similar or better accuracy. Moreover, the resources needed by CSPN++ can be adjusted automatically with respect to the computational budget. Finally, to avoid the side effects of noisy or inaccurate sparse depths, we embed a gated network inside CSPN++, which further improves performance. We demonstrate the effectiveness of CSPN++ on the KITTI depth completion benchmark, where it significantly improves over CSPN and other SoTA methods.
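The "context-aware" weighted assembling described above can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not the paper's implementation: `cspn_propagate` uses plain box filtering in place of the learned affinity-weighted CSPN update, and the per-pixel softmax weights (here the `logits` array) would in practice be predicted by the network rather than supplied by hand.

```python
import numpy as np

def cspn_propagate(depth, kernel_size, num_iters):
    """Toy propagation run: repeated neighborhood averaging with the given
    kernel size (a stand-in for affinity-weighted CSPN updates)."""
    pad = kernel_size // 2
    out = depth.copy()
    for _ in range(num_iters):
        padded = np.pad(out, pad, mode="edge")
        acc = np.zeros_like(out)
        for dy in range(kernel_size):
            for dx in range(kernel_size):
                acc += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
        out = acc / (kernel_size * kernel_size)
    return out

def context_aware_assemble(depth, configs, logits):
    """Softly combine the outputs of several (kernel_size, num_iters)
    configurations with per-pixel softmax weights -- a sketch of the
    'context-aware' weighted assembling in CSPN++."""
    outs = np.stack([cspn_propagate(depth, k, t) for k, t in configs])  # (C, H, W)
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)  # per-pixel softmax over configurations
    return (w * outs).sum(axis=0)

# Hypothetical candidate configurations and a dummy sparse depth map.
configs = [(3, 1), (5, 2), (7, 3)]
depth = np.random.default_rng(0).random((8, 8))
logits = np.zeros((len(configs), 8, 8))  # uniform weights for the demo
dense = context_aware_assemble(depth, configs, logits)
```

The "resource-aware" variant would instead pick, per pixel, the single configuration with the highest weight, skipping the remaining propagation runs to save computation.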

Results

Task              Dataset                            Metric  Value   Model
Depth Estimation  KITTI Depth Completion Validation  RMSE    725.43  CSPN++
3D                KITTI Depth Completion Validation  RMSE    725.43  CSPN++

Related Papers

PacGDC: Label-Efficient Generalizable Depth Completion with Projection Ambiguity and Consistency (2025-07-10)
DidSee: Diffusion-Based Depth Completion for Material-Agnostic Robotic Perception and Manipulation (2025-06-26)
DCIRNet: Depth Completion with Iterative Refinement for Dexterous Grasping of Transparent and Reflective Objects (2025-06-11)
SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping (2025-05-30)
HTMNet: A Hybrid Network with Transformer-Mamba Bottleneck Multimodal Fusion for Transparent and Reflective Objects Depth Completion (2025-05-27)
BadDepth: Backdoor Attacks Against Monocular Depth Estimation in the Physical World (2025-05-22)
Event-Driven Dynamic Scene Depth Completion (2025-05-19)
Depth Anything with Any Prior (2025-05-15)