Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Free-Form Image Inpainting with Gated Convolution

Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas Huang

2018-06-10 · ICCV 2019 · Tasks: Feature Selection, Image Inpainting

Paper · PDF · Code (official)

Abstract

We present a generative image inpainting system to complete images with free-form masks and guidance. The system is based on gated convolutions learned from millions of images without additional labelling effort. The proposed gated convolution solves the issue of vanilla convolution, which treats all input pixels as valid ones, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, which applies a spectral-normalized discriminator to dense image patches. SN-PatchGAN is simple in formulation, and fast and stable in training. Results on automatic image inpainting and the user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting
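The two components described above can be sketched in a few lines of PyTorch. This is an illustrative reconstruction from the abstract's description, not the authors' implementation: the gated convolution multiplies an activated feature branch by a learned sigmoid gate per channel and location, and the SN-PatchGAN discriminator applies spectral normalization to each convolution and a hinge loss directly on the dense patch-wise output map. All layer sizes and names here (`GatedConv2d`, `disc`, `d_hinge_loss`) are hypothetical.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Sketch of a gated convolution: output = act(features) * sigmoid(gates).

    Unlike partial convolution's hard valid/invalid mask, the gate is a
    learnable soft mask in [0, 1] for each channel at each spatial location.
    """
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        # The gating branch learns where to attend (valid regions,
        # mask boundaries, user sketch lines), dynamically per input.
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# Toy SN-PatchGAN-style discriminator: spectral norm on each conv, no final
# global pooling -- the hinge loss is applied to every patch output directly.
disc = nn.Sequential(
    nn.utils.spectral_norm(nn.Conv2d(3, 64, 4, 2, 1)), nn.LeakyReLU(0.2),
    nn.utils.spectral_norm(nn.Conv2d(64, 1, 4, 2, 1)),
)

def d_hinge_loss(real_out, fake_out):
    # Hinge loss averaged over the dense patch map (one GAN "per patch").
    return torch.relu(1 - real_out).mean() + torch.relu(1 + fake_out).mean()
```

A full model stacks many gated convolutions in a coarse-to-fine encoder-decoder; the key design point is that the gate makes masked-region handling learnable rather than rule-based.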

Results

Task              Dataset      Metric  Value  Model
Image Generation  Places2      FID     9.27   DeepFill v2
Image Generation  Places2      P-IDS   4.01   DeepFill v2
Image Generation  Places2      U-IDS   21.32  DeepFill v2
Image Generation  Places2 val  FID     13.5   DeepFill v2 (20-30% free-form mask)
Image Generation  Places2 val  PD      63     DeepFill v2 (20-30% free-form mask)
Image Generation  Places2 val  FID     15.3   DeepFill v2 (128×128 center mask)
Image Generation  Places2 val  PD      96.3   DeepFill v2 (128×128 center mask)
Image Inpainting  Places2      FID     9.27   DeepFill v2
Image Inpainting  Places2      P-IDS   4.01   DeepFill v2
Image Inpainting  Places2      U-IDS   21.32  DeepFill v2
Image Inpainting  Places2 val  FID     13.5   DeepFill v2 (20-30% free-form mask)
Image Inpainting  Places2 val  PD      63     DeepFill v2 (20-30% free-form mask)
Image Inpainting  Places2 val  FID     15.3   DeepFill v2 (128×128 center mask)
Image Inpainting  Places2 val  PD      96.3   DeepFill v2 (128×128 center mask)

Related Papers

mNARX+: A surrogate model for complex dynamical systems using manifold-NARX and automatic feature selection (2025-07-17)
Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection (2025-07-15)
Lightweight Model for Poultry Disease Detection from Fecal Images Using Multi-Color Space Feature Optimization and Machine Learning (2025-07-14)
FreeAudio: Training-Free Timing Planning for Controllable Long-Form Text-to-Audio Generation (2025-07-11)
RePaintGS: Reference-Guided Gaussian Splatting for Realistic and View-Consistent 3D Scene Inpainting (2025-07-11)
From Motion to Meaning: Biomechanics-Informed Neural Network for Explainable Cardiovascular Disease Identification (2025-07-08)
MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting (2025-06-30)
Vulnerability Disclosure through Adaptive Black-Box Adversarial Attacks on NIDS (2025-06-25)