Papers With Code 2

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting

Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, Zhan Xu

Published: 2020-05-19 | CVPR 2020 | Task: Image Inpainting

Paper | PDF | Code (official)

Abstract

Recently, data-driven image inpainting methods have made inspiring progress, impacting fundamental image-editing tasks such as object removal and damaged-image repair. These methods are more effective than classic approaches; however, due to memory limitations they can only handle low-resolution inputs, typically smaller than 1K, while the resolution of photos captured with mobile devices has increased to 8K. Naive up-sampling of the low-resolution inpainted result merely yields a large yet blurry image, whereas adding a high-frequency residual image onto the large blurry image produces a sharp result, rich in details and textures. Motivated by this, we propose a Contextual Residual Aggregation (CRA) mechanism that produces high-frequency residuals for missing contents by weighted aggregation of residuals from contextual patches, so that only a low-resolution prediction is required from the network. Since the convolutional layers of the neural network operate only on low-resolution inputs and outputs, the cost in memory and computing power is well contained. Moreover, the need for high-resolution training datasets is alleviated. In our experiments, we train the proposed model on small 512×512 images and perform inference on high-resolution images, achieving compelling inpainting quality. Our model can inpaint images as large as 8K with considerable hole sizes, which is intractable for previous learning-based approaches. We further elaborate on the lightweight design of the network architecture, achieving real-time performance on 2K images on a GTX 1080 Ti GPU. Code is available at: Atlas200dk/sample-imageinpainting-HiFill.
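The core CRA idea in the abstract — upsample a low-resolution inpainted prediction, then recover the missing high-frequency detail by attention-weighted aggregation of residuals from known contextual patches — can be sketched as follows. This is a toy, single-channel NumPy illustration under assumed simplifications (non-overlapping patches, nearest-neighbor upsampling, cosine-similarity attention); the function name and parameters are hypothetical and not the paper's implementation.

```python
import numpy as np

def cra_inpaint(lr_filled, hr_image, hole_mask, scale=4, p=2):
    """Simplified Contextual Residual Aggregation sketch (grayscale).

    lr_filled : (H//scale, W//scale) low-res image with the hole already
                inpainted by some network (assumed given here).
    hr_image  : (H, W) high-res input; hole pixels are invalid (zeroed).
    hole_mask : (H, W) binary mask, 1 = missing.
    Assumes H and W are divisible by p * scale.
    """
    H, W = hr_image.shape
    s = p * scale  # patch size on the high-res grid

    # 1) Naive upsampling of the low-res result: large but blurry.
    blurry = np.kron(lr_filled, np.ones((scale, scale)))
    # 2) The high-frequency residual is only known in the context region.
    residual = (hr_image - blurry) * (1 - hole_mask)

    # Split images into non-overlapping patches: p*p patches on the low-res
    # grid for similarities, s*s patches on the high-res grid for residuals.
    def patches(img, size):
        h, w = img.shape
        return (img.reshape(h // size, size, w // size, size)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, size * size))

    lr_pat = patches(lr_filled, p)                       # similarity features
    res_pat = patches(residual, s)                       # residual content
    msk_pat = patches(hole_mask, s).max(axis=1)          # 1 if patch touches hole

    ctx = np.where(msk_pat == 0)[0]                      # fully-known patches
    hole = np.where(msk_pat > 0)[0]

    # 3) Attention: cosine similarity between hole and context patches,
    #    computed at low resolution, softmax-normalized per hole patch.
    f = lr_pat / (np.linalg.norm(lr_pat, axis=1, keepdims=True) + 1e-8)
    scores = f[hole] @ f[ctx].T
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)

    # 4) Aggregate context residuals into the holes with those weights.
    res_pat[hole] = w @ res_pat[ctx]

    # Reassemble: blurry upsample + (known + aggregated) residuals.
    res = (res_pat.reshape(H // s, W // s, s, s)
                  .transpose(0, 2, 1, 3).reshape(H, W))
    return blurry + res
```

Outside the hole the output reproduces the high-res input exactly (the residual there is the true one), while inside the hole the detail is borrowed from the most similar context patches — which is why only the low-resolution prediction ever has to pass through the network.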

Results

Task | Dataset | Metric | Value | Model
Image Generation | Places2 | FID | 28.92 | HiFill
Image Generation | Places2 | P-IDS | 1.24 | HiFill
Image Generation | Places2 | U-IDS | 11.24 | HiFill
Image Generation | Places2 val | FID | 15.7 | HiFill (20-30% free form)
Image Generation | Places2 val | PD | 92.8 | HiFill (20-30% free form)
Image Generation | Places2 val | FID | 16.9 | HiFill (128×128 center mask)
Image Generation | Places2 val | PD | 115.4 | HiFill (128×128 center mask)
Image Inpainting | Places2 | FID | 28.92 | HiFill
Image Inpainting | Places2 | P-IDS | 1.24 | HiFill
Image Inpainting | Places2 | U-IDS | 11.24 | HiFill
Image Inpainting | Places2 val | FID | 15.7 | HiFill (20-30% free form)
Image Inpainting | Places2 val | PD | 92.8 | HiFill (20-30% free form)
Image Inpainting | Places2 val | FID | 16.9 | HiFill (128×128 center mask)
Image Inpainting | Places2 val | PD | 115.4 | HiFill (128×128 center mask)

Related Papers

- MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-14)
- RePaintGS: Reference-Guided Gaussian Splatting for Realistic and View-Consistent 3D Scene Inpainting (2025-07-11)
- MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-10)
- Understanding and Improving Length Generalization in Recurrent Models (2025-07-03)
- MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting (2025-06-30)
- 3DeepRep: 3D Deep Low-rank Tensor Representation for Hyperspectral Image Inpainting (2025-06-20)
- A strengthened bound on the number of states required to characterize maximum parsimony distance (2025-06-11)
- Structured Variational $D$-Decomposition for Accurate and Stable Low-Rank Approximation (2025-06-10)