Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Coherent Semantic Attention for Image Inpainting

Hongyu Liu, Bin Jiang, Yi Xiao, Chao Yang

2019-05-29 · ICCV 2019 · Image Inpainting

Paper · PDF · Code (official)

Abstract

The latest deep learning-based approaches have shown promising results for the challenging task of inpainting missing regions of an image. However, existing methods often generate content with blurry textures and distorted structures due to discontinuity of the local pixels. From a semantic-level perspective, this local pixel discontinuity arises mainly because these methods ignore the semantic relevance and feature continuity of hole regions. To handle this problem, we investigate how humans repair pictures and propose a refined deep generative model-based approach with a novel coherent semantic attention (CSA) layer, which not only preserves contextual structure but also makes more effective predictions of missing parts by modeling the semantic relevance among the hole features. We divide the task into two steps, rough inpainting and refinement, and model each step with a neural network under the U-Net architecture, where the CSA layer is embedded into the encoder of the refinement step. To stabilize network training and encourage the CSA layer to learn more effective parameters, we propose a consistency loss that simultaneously enforces both the CSA layer and the corresponding layer in the decoder to be close to the VGG feature layer of the ground-truth image. Experiments on the CelebA, Places2, and Paris StreetView datasets validate the effectiveness of the proposed method in image inpainting tasks: it obtains higher-quality images than existing state-of-the-art approaches.
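The consistency loss described in the abstract pulls two feature maps toward the VGG features of the ground-truth image. A minimal sketch of that idea as a pair of L2 penalties is below; the array names and shapes are hypothetical stand-ins for illustration, not the authors' implementation:

```python
import numpy as np

def consistency_loss(csa_feat, dec_feat, vgg_feat):
    """Sum of two mean-squared-error terms: one pulling the CSA-layer
    features and one pulling the corresponding decoder-layer features
    toward the VGG features of the ground-truth image.

    All three arguments are hypothetical feature maps of the same shape.
    """
    csa_term = np.mean((csa_feat - vgg_feat) ** 2)
    dec_term = np.mean((dec_feat - vgg_feat) ** 2)
    return csa_term + dec_term
```

Tying both the encoder-side CSA features and the decoder-side features to the same fixed VGG target is what, per the abstract, stabilizes training of the refinement network.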

Results

| Task             | Dataset          | Metric            | Value | Model                                            |
|------------------|------------------|-------------------|-------|--------------------------------------------------|
| Image Generation | Paris StreetView | 10-20% Mask PSNR  | 32.67 | Coherent Semantic Attention for Image Inpainting |
| Image Generation | Paris StreetView | 20-30% Mask PSNR  | 30.32 | Coherent Semantic Attention for Image Inpainting |
| Image Generation | Paris StreetView | 30-40% Mask PSNR  | 24.85 | Coherent Semantic Attention for Image Inpainting |
| Image Generation | Paris StreetView | 40-50% Mask PSNR  | 23.1  | Coherent Semantic Attention for Image Inpainting |
| Image Inpainting | Paris StreetView | 10-20% Mask PSNR  | 32.67 | Coherent Semantic Attention for Image Inpainting |
| Image Inpainting | Paris StreetView | 20-30% Mask PSNR  | 30.32 | Coherent Semantic Attention for Image Inpainting |
| Image Inpainting | Paris StreetView | 30-40% Mask PSNR  | 24.85 | Coherent Semantic Attention for Image Inpainting |
| Image Inpainting | Paris StreetView | 40-50% Mask PSNR  | 23.1  | Coherent Semantic Attention for Image Inpainting |
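The PSNR values reported above are derived from the mean squared error between the ground-truth and inpainted images. A minimal sketch of the standard computation, assuming 8-bit images (this is the generic metric definition, not tied to the paper's evaluation code):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    restored image, both given as arrays of pixel values in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: a PSNR of 32.67 dB on 10-20% masks versus 23.1 dB on 40-50% masks reflects the expected drop in fidelity as the missing region grows.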

Related Papers

- RePaintGS: Reference-Guided Gaussian Splatting for Realistic and View-Consistent 3D Scene Inpainting (2025-07-11)
- MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting (2025-06-30)
- 3DeepRep: 3D Deep Low-rank Tensor Representation for Hyperspectral Image Inpainting (2025-06-20)
- Geological Field Restoration through the Lens of Image Inpainting (2025-06-05)
- DreamDance: Animating Character Art via Inpainting Stable Gaussian Worlds (2025-05-30)
- Structure Disruption: Subverting Malicious Diffusion-Based Inpainting via Self-Attention Query Perturbation (2025-05-26)
- Unsupervised Raindrop Removal from a Single Image using Conditional Diffusion Models (2025-05-13)
- CaRaFFusion: Improving 2D Semantic Segmentation with Camera-Radar Point Cloud Fusion and Zero-Shot Image Inpainting (2025-05-06)