Papers With Code 2



Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SinDiffusion: Learning a Diffusion Model from a Single Natural Image

Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, Houqiang Li

2022-11-22 · Denoising · Image Outpainting · Image Generation

Paper · PDF · Code (official)

Abstract

We present SinDiffusion, which leverages denoising diffusion models to capture the internal distribution of patches from a single natural image. SinDiffusion significantly improves the quality and diversity of generated samples compared with existing GAN-based approaches. It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale, instead of the multiple models with progressively growing scales that serve as the default setting in prior work. This avoids the accumulation of errors that causes characteristic artifacts in generated results. Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics; we therefore redesign the network structure of the diffusion model. Coupling these two designs enables us to generate photorealistic and diverse images from a single image. Furthermore, SinDiffusion can be applied to various applications, e.g., text-guided image generation and image outpainting, thanks to the inherent capability of diffusion models. Extensive experiments on a wide range of images demonstrate the superiority of our proposed method for modeling the patch distribution.
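The abstract's second design choice hinges on the network's effective receptive field: a deep U-Net with repeated downsampling sees the whole image, while a shallower stack of stride-1 convolutions only sees patch-sized regions. The paper's exact architecture is not reproduced here, but the standard receptive-field recurrence illustrates the difference; the layer configurations below are hypothetical examples, not the SinDiffusion network.

```python
def receptive_field(layers):
    """Effective receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride) pairs, applied in order.
    Uses the standard recurrence: each layer grows the field by
    (kernel_size - 1) times the cumulative stride ("jump") so far.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Hypothetical patch-level network: three stride-1 3x3 convs.
print(receptive_field([(3, 1)] * 3))                      # 7x7 patches

# Hypothetical deeper stack with three 2x downsamplings:
# the receptive field quickly approaches whole-image scale.
print(receptive_field([(3, 2), (3, 2), (3, 2), (3, 1)]))  # 31x31
```

Keeping the receptive field small in this sense is what forces the denoiser to model patch statistics rather than memorize the single training image globally.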

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | Places50 | LPIPS | 0.387 | SinDiffusion |
| Image Generation | Places50 | SIFID | 0.06 | SinDiffusion |
| Image Generation | Places50 | LPIPS | 0.305 | ConSinGAN |
| Image Generation | Places50 | SIFID | 0.06 | ConSinGAN |
| Image Generation | Places50 | LPIPS | 0.266 | SinGAN |
| Image Generation | Places50 | SIFID | 0.09 | SinGAN |
| Image Generation | Places50 | LPIPS | 0.256 | GPNN |
| Image Generation | Places50 | SIFID | 0.07 | GPNN |
| Image Generation | Places50 | LPIPS | 0.248 | ExSinGAN |
| Image Generation | Places50 | SIFID | 0.1 | ExSinGAN |

Related Papers

- fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
- Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
- Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
- FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
- A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing Constraints (2025-07-17)
- Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)