Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Image Inpainting with Learnable Bidirectional Attention Maps

Chaohao Xie, Shaohui Liu, Chao Li, Ming-Ming Cheng, Wangmeng Zuo, Xiao Liu, Shilei Wen, Errui Ding

2019-09-03 · ICCV 2019 · Image Inpainting
Paper · PDF · Code (official)

Abstract

Most convolutional neural network (CNN)-based inpainting methods adopt standard convolution, which treats valid pixels and holes indistinguishably; this limits their ability to handle irregular holes and makes them more likely to produce inpainting results with color discrepancy and blurriness. Partial convolution has been suggested to address this issue, but it adopts handcrafted feature re-normalization and considers only forward mask-updating. In this paper, we present a learnable attention map module that learns feature re-normalization and mask-updating in an end-to-end manner, which is effective in adapting to irregular holes and to their propagation through convolution layers. Furthermore, learnable reverse attention maps are introduced so that the decoder of the U-Net concentrates on filling in irregular holes instead of reconstructing both holes and known regions, yielding our learnable bidirectional attention maps. Qualitative and quantitative experiments show that our method performs favorably against state-of-the-art methods in generating sharper, more coherent, and visually plausible inpainting results. The source code and pre-trained models will be available.
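For context on what the abstract improves upon: partial convolution's handcrafted scheme masks out hole pixels, re-normalizes by the fraction of valid pixels in each window, and marks a window valid if it contains any valid pixel (the "forward mask-updating" the paper replaces with learned attention maps). A minimal 1D toy sketch of that baseline, with illustrative names (`partial_conv1d` is not from the paper's code):

```python
def partial_conv1d(x, mask, weights, bias=0.0):
    """Toy 1D partial convolution (the handcrafted baseline):
    holes (mask == 0) are excluded from the weighted sum, and the
    result is re-normalized by window_size / num_valid_pixels.
    Returns (output, updated_mask)."""
    k = len(weights)
    out, new_mask = [], []
    for i in range(len(x) - k + 1):
        xs, ms = x[i:i + k], mask[i:i + k]
        valid = sum(ms)
        if valid > 0:
            s = sum(w * xi * mi for w, xi, mi in zip(weights, xs, ms))
            out.append(s * (k / valid) + bias)  # handcrafted re-normalization
            new_mask.append(1)                  # forward mask update: any valid pixel -> valid
        else:
            out.append(0.0)
            new_mask.append(0)                  # all-hole window stays a hole
    return out, new_mask
```

The paper's contribution is to replace both the fixed `k / valid` re-normalization and the hard 0/1 mask update with learned, differentiable attention maps, applied in both the encoder (forward) and decoder (reverse) of a U-Net.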

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | Paris StreetView | 10-20% Mask PSNR | 28.73 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Generation | Paris StreetView | 20-30% Mask PSNR | 26.16 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Generation | Paris StreetView | 30-40% Mask PSNR | 24.26 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Generation | Paris StreetView | 40-50% Mask PSNR | 22.62 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Inpainting | Paris StreetView | 10-20% Mask PSNR | 28.73 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Inpainting | Paris StreetView | 20-30% Mask PSNR | 26.16 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Inpainting | Paris StreetView | 30-40% Mask PSNR | 24.26 | Image Inpainting with Learnable Bidirectional Attention Maps |
| Image Inpainting | Paris StreetView | 40-50% Mask PSNR | 22.62 | Image Inpainting with Learnable Bidirectional Attention Maps |
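The table reports PSNR (peak signal-to-noise ratio) at increasing mask ratios. PSNR is defined as 10·log10(peak² / MSE) between the inpainted result and the ground truth; a minimal sketch (the function name and flat-list interface are illustrative, not from the paper's evaluation code):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    flat pixel lists: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher is better; the drop from 28.73 dB (10-20% masks) to 22.62 dB (40-50% masks) reflects the expected difficulty of filling larger holes.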

Related Papers

- RePaintGS: Reference-Guided Gaussian Splatting for Realistic and View-Consistent 3D Scene Inpainting (2025-07-11)
- MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting (2025-06-30)
- 3DeepRep: 3D Deep Low-rank Tensor Representation for Hyperspectral Image Inpainting (2025-06-20)
- Geological Field Restoration through the Lens of Image Inpainting (2025-06-05)
- DreamDance: Animating Character Art via Inpainting Stable Gaussian Worlds (2025-05-30)
- Structure Disruption: Subverting Malicious Diffusion-Based Inpainting via Self-Attention Query Perturbation (2025-05-26)
- Unsupervised Raindrop Removal from a Single Image using Conditional Diffusion Models (2025-05-13)
- CaRaFFusion: Improving 2D Semantic Segmentation with Camera-Radar Point Cloud Fusion and Zero-Shot Image Inpainting (2025-05-06)