Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bridging Global Context Interactions for High-Fidelity Image Completion

Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, Dinh Phung

2021-04-02 · CVPR 2022 · Image Inpainting
Paper · PDF · Code (official)

Abstract

Bridging global context interactions correctly is important for high-fidelity image completion with large masks. Previous methods attempting this via deep or large receptive field (RF) convolutions cannot escape from the dominance of nearby interactions, which may be inferior. In this paper, we propose to treat image completion as a directionless sequence-to-sequence prediction task, and deploy a transformer to directly capture long-range dependence in the encoder. Crucially, we employ a restrictive CNN with small and non-overlapping RF for weighted token representation, which allows the transformer to explicitly model the long-range visible context relations with equal importance in all layers, without implicitly confounding neighboring tokens when larger RFs are used. To improve appearance consistency between visible and generated regions, a novel attention-aware layer (AAL) is introduced to better exploit distantly related high-frequency features. Overall, extensive experiments demonstrate superior performance compared to state-of-the-art methods on several datasets.
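The restrictive tokenizer described in the abstract uses a small, non-overlapping receptive field, so each token summarizes exactly one patch of visible pixels and no information leaks between neighboring tokens before the transformer's attention layers. A minimal sketch of this non-overlapping tokenization, assuming a simple reshape-based patch split (the function name, patch size, and plain NumPy setup are illustrative, not taken from the paper's code):

```python
import numpy as np

def tokenize_non_overlapping(img, patch=4):
    """Split an (H, W, C) image into flat tokens from non-overlapping
    patch x patch windows, so each token sees only its own pixels."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_tokens, p*p*C)
    tokens = img.reshape(H // patch, patch, W // patch, patch, C)
    tokens = tokens.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    return tokens

img = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
tokens = tokenize_non_overlapping(img, patch=4)
print(tokens.shape)  # (4, 48): 2x2 grid of tokens, each 4*4*3 values
```

Because the windows never overlap, masked regions corrupt only their own tokens; a transformer encoder over these tokens can then weigh distant visible context equally at every layer, which is the property the abstract contrasts with large overlapping convolutional receptive fields.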

Results

Task             | Dataset      | Metric | Value | Model
Image Generation | FFHQ 512x512 | FID    | 3.5   | TFill
Image Generation | Places2      | FID    | 22.13 | TFill (20-50% free-form)
Image Generation | Places2 val  | FID    | 15.2  | TFill (20-30% free-form)
Image Generation | Places2 val  | PD     | 87.2  | TFill (20-30% free-form)
Image Inpainting | FFHQ 512x512 | FID    | 3.5   | TFill
Image Inpainting | Places2      | FID    | 22.13 | TFill (20-50% free-form)
Image Inpainting | Places2 val  | FID    | 15.2  | TFill (20-30% free-form)
Image Inpainting | Places2 val  | PD     | 87.2  | TFill (20-30% free-form)

Related Papers

RePaintGS: Reference-Guided Gaussian Splatting for Realistic and View-Consistent 3D Scene Inpainting (2025-07-11)
MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting (2025-06-30)
3DeepRep: 3D Deep Low-rank Tensor Representation for Hyperspectral Image Inpainting (2025-06-20)
Geological Field Restoration through the Lens of Image Inpainting (2025-06-05)
DreamDance: Animating Character Art via Inpainting Stable Gaussian Worlds (2025-05-30)
Structure Disruption: Subverting Malicious Diffusion-Based Inpainting via Self-Attention Query Perturbation (2025-05-26)
Unsupervised Raindrop Removal from a Single Image using Conditional Diffusion Models (2025-05-13)
CaRaFFusion: Improving 2D Semantic Segmentation with Camera-Radar Point Cloud Fusion and Zero-Shot Image Inpainting (2025-05-06)