Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DVI: Depth Guided Video Inpainting for Autonomous Driving

Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang

2020-07-17 · ECCV 2020 · Tasks: Point Cloud Registration, Image Inpainting, Autonomous Driving, Video Inpainting

Links: Paper · PDF · Code · Code (official)

Abstract

To obtain clear street views and photo-realistic simulation for autonomous driving, we present an automatic video inpainting algorithm that removes traffic agents from videos and synthesizes the missing regions under the guidance of depth/point cloud data. By building a dense 3D map from stitched point clouds, frames within a video are geometrically correlated via this common 3D map. To fill a target inpainting area in a frame, pixels from other frames can be transformed into the current one with correct occlusion. Furthermore, we are able to fuse multiple videos through 3D point cloud registration, making it possible to inpaint a target video with multiple source videos. The motivation is to solve the long-time occlusion problem, where an occluded area is never visible in the entire video. To our knowledge, we are the first to fuse multiple videos for video inpainting. To verify the effectiveness of our approach, we build a large inpainting dataset in a real urban road environment with synchronized images and LiDAR data, including many challenging scenes, e.g., long-time occlusion. Experimental results show that the proposed approach outperforms state-of-the-art approaches on all criteria; in particular, RMSE (Root Mean Squared Error) is reduced by about 13%.
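The abstract's core geometric idea, transforming pixels from other frames into the target frame with correct occlusion, can be illustrated as a depth-based warp. The following is a minimal sketch, not the paper's implementation: it assumes a pinhole intrinsic matrix `K`, a per-pixel depth map for the source frame, and a 4x4 relative pose; the function name and signature are hypothetical, and occlusion is resolved with a simple z-buffer rather than the paper's dense 3D map.

```python
import numpy as np

def warp_pixels(depth_src, K, T_src_to_tgt, image_src):
    """Project every pixel of a source frame into the target frame using
    its depth map, pinhole intrinsics K, and a 4x4 relative pose.
    A z-buffer keeps only the nearest surface at each target pixel,
    which is how occlusion is resolved geometrically."""
    h, w = depth_src.shape
    # Back-project source pixels to 3D points in the source camera frame.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    u_flat, v_flat = us.ravel(), vs.ravel()
    z = depth_src.ravel()
    pts = np.linalg.inv(K) @ np.vstack([u_flat * z, v_flat * z, z])
    pts_h = np.vstack([pts, np.ones(z.size)])
    # Transform into the target camera frame and project with K.
    pts_tgt = (T_src_to_tgt @ pts_h)[:3]
    uvw = K @ pts_tgt
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    # Z-buffer: the nearest depth wins at each target pixel.
    out = np.zeros_like(image_src)
    zbuf = np.full((h, w), np.inf)
    valid = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for i in np.flatnonzero(valid):
        if pts_tgt[2, i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = pts_tgt[2, i]
            out[v[i], u[i]] = image_src[v_flat[i], u_flat[i]]
    return out
```

With the identity pose, the warp reproduces the source image, which is a quick sanity check before plugging in real poses from the registered point cloud map.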

Results

| Task             | Dataset                | Metric | Value  | Model |
|------------------|------------------------|--------|--------|-------|
| Image Generation | Apolloscape Inpainting | RMSE   | 9.633  | DVI   |
| Image Generation | ApolloScape            | MAE    | 6.135  | DVI   |
| Image Generation | ApolloScape            | PSNR   | 21.631 | DVI   |
| Image Generation | ApolloScape            | RMSE   | 9.633  | DVI   |
| Image Generation | ApolloScape            | SSIM   | 0.895  | DVI   |
| Image Inpainting | Apolloscape Inpainting | RMSE   | 9.633  | DVI   |
| Image Inpainting | ApolloScape            | MAE    | 6.135  | DVI   |
| Image Inpainting | ApolloScape            | PSNR   | 21.631 | DVI   |
| Image Inpainting | ApolloScape            | RMSE   | 9.633  | DVI   |
| Image Inpainting | ApolloScape            | SSIM   | 0.895  | DVI   |
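The per-pixel criteria reported above (RMSE, MAE, PSNR) can be computed as follows. This is a minimal sketch with a hypothetical function name, assuming float images in the [0, 255] range; SSIM requires a windowed computation and is omitted here.

```python
import numpy as np

def inpainting_metrics(pred, gt):
    """RMSE, MAE, and PSNR between an inpainted result and the
    ground-truth image, both given as arrays of values in [0, 255]."""
    err = pred.astype(np.float64) - gt.astype(np.float64)
    rmse = float(np.sqrt(np.mean(err ** 2)))           # Root Mean Squared Error
    mae = float(np.mean(np.abs(err)))                  # Mean Absolute Error
    # PSNR in dB, using the maximum possible pixel value of 255.
    psnr = float(20 * np.log10(255.0 / rmse)) if rmse > 0 else float("inf")
    return {"RMSE": rmse, "MAE": mae, "PSNR": psnr}
```

Lower RMSE/MAE and higher PSNR indicate better reconstruction, which is why the roughly 13% RMSE reduction quoted in the abstract corresponds to an improvement.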

Related Papers

- GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
- AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
- World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
- Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
- Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
- LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
- A Multi-Level Similarity Approach for Single-View Object Grasping: Matching, Planning, and Fine-Tuning (2025-07-16)
- Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)