Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Diffusion-GAN: Training GANs with Diffusion

Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou

2022-06-05 · Image Generation

Abstract

Generative adversarial networks (GANs) are challenging to train stably, and a promising remedy of injecting instance noise into the discriminator input has not been very effective in practice. In this paper, we propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate Gaussian-mixture distributed instance noise. Diffusion-GAN consists of three components, including an adaptive diffusion process, a diffusion timestep-dependent discriminator, and a generator. Both the observed and generated data are diffused by the same adaptive diffusion process. At each diffusion timestep, there is a different noise-to-data ratio and the timestep-dependent discriminator learns to distinguish the diffused real data from the diffused generated data. The generator learns from the discriminator's feedback by backpropagating through the forward diffusion chain, whose length is adaptively adjusted to balance the noise and data levels. We theoretically show that the discriminator's timestep-dependent strategy gives consistent and helpful guidance to the generator, enabling it to match the true data distribution. We demonstrate the advantages of Diffusion-GAN over strong GAN baselines on various datasets, showing that it can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
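The training recipe in the abstract — diffuse both real and generated data through the same forward chain, score the diffused samples with a timestep-conditioned discriminator, backpropagate the generator loss through the diffusion, and adapt the chain length — can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the tiny MLPs, linear noise schedule, scalar timestep embedding, and the `adapt_T` target value are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

T_MAX = 500
betas = torch.linspace(1e-4, 2e-2, T_MAX)       # linear noise schedule (assumed)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t for closed-form sampling

def diffuse(x, t):
    """Sample y ~ q(y | x, t) in closed form: sqrt(abar_t)*x + sqrt(1-abar_t)*eps."""
    ab = alpha_bars[t].view(-1, 1)              # per-example \bar{alpha}_t
    eps = torch.randn_like(x)
    return ab.sqrt() * x + (1.0 - ab).sqrt() * eps

# Toy generator and discriminator on 2-D data; the discriminator sees the
# diffused sample *and* its timestep, mimicking the timestep-dependent D.
G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
D = torch.nn.Sequential(torch.nn.Linear(2 + 1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

def d_logits(y, t, T):
    t_feat = (t.float() / T).view(-1, 1)        # simplistic scalar timestep embedding (assumed)
    return D(torch.cat([y, t_feat], dim=1))

def train_step(x_real, T):
    """One adversarial step: real and fake pass through the SAME diffusion."""
    z = torch.randn(x_real.size(0), 8)
    x_fake = G(z)
    t = torch.randint(0, T, (x_real.size(0),))  # sample a timestep per example
    y_real, y_fake = diffuse(x_real, t), diffuse(x_fake, t)
    loss_d = F.softplus(-d_logits(y_real, t, T)).mean() + \
             F.softplus(d_logits(y_fake.detach(), t, T)).mean()
    loss_g = F.softplus(-d_logits(y_fake, t, T)).mean()  # grads flow through diffuse()
    return loss_d, loss_g

def adapt_T(T, r_d, target=0.6, step=4):
    """Lengthen the chain when the discriminator overfits (r_d above target), else shorten it."""
    return min(T_MAX, T + step) if r_d > target else max(8, T - step)

x = torch.randn(16, 2)
loss_d, loss_g = train_step(x, T=64)
```

Because `diffuse` is differentiable in `x`, the generator receives gradient through the forward chain exactly as the abstract describes; `adapt_T` stands in for the paper's adaptive adjustment of the chain length.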

Results

| Task             | Dataset                 | Metric    | Value | Model                           |
|------------------|-------------------------|-----------|-------|---------------------------------|
| Image Generation | STL-10                  | FID       | 6.91  | Diffusion ProjectedGAN          |
| Image Generation | STL-10                  | FID       | 11.53 | Diffusion StyleGAN2             |
| Image Generation | AFHQ Wild               | FID       | 1.51  | Diffusion InsGen                |
| Image Generation | AFHQ Dog                | FID       | 4.83  | Diffusion InsGen                |
| Image Generation | AFHQ Cat                | FID       | 2.4   | Diffusion InsGen                |
| Image Generation | CelebA 64x64            | FID       | 1.69  | Diffusion StyleGAN2             |
| Image Generation | LSUN Bedroom 256x256    | FID       | 1.43  | Diffusion ProjectedGAN          |
| Image Generation | LSUN Bedroom 256x256    | FID       | 3.65  | Diffusion StyleGAN2             |
| Image Generation | LSUN Bedroom 256x256    | FD        | 547.61| Diffusion ProjectedGAN (DINOv2) |
| Image Generation | LSUN Bedroom 256x256    | Precision | 0.79  | Diffusion ProjectedGAN (DINOv2) |
| Image Generation | LSUN Bedroom 256x256    | Recall    | 0.28  | Diffusion ProjectedGAN (DINOv2) |
| Image Generation | FFHQ 1024x1024          | FID       | 2.83  | Diffusion StyleGAN2             |
| Image Generation | LSUN Churches 256x256   | FID       | 1.85  | Diffusion ProjectedGAN          |
| Image Generation | LSUN Churches 256x256   | FID       | 3.17  | Diffusion StyleGAN2             |
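Most rows above report FID, the Fréchet distance between Gaussians fitted to feature embeddings of real and generated images (Inception-V3 pool features in the standard protocol): FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1*S2)^(1/2)). A minimal sketch of the closed form, using toy statistics in place of real Inception features:

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy usage: in practice mu/sigma come from Inception-V3 features of
# thousands of real and generated images, not from random vectors.
rng = np.random.default_rng(0)
feats_real = rng.standard_normal((1000, 4))
feats_fake = rng.standard_normal((1000, 4)) + 0.5
score = fid(feats_real.mean(0), np.cov(feats_real, rowvar=False),
            feats_fake.mean(0), np.cov(feats_fake, rowvar=False))
```

Lower is better, which is why the sub-2 FID entries in the table indicate near-photorealistic samples; the DINOv2-feature FD row uses the same formula over a different embedding, so its scale is not comparable to the FID rows.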

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection (2025-07-17)
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization (2025-07-17)
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
CharaConsist: Fine-Grained Consistent Character Generation (2025-07-15)
CATVis: Context-Aware Thought Visualization (2025-07-15)