Description
Wasserstein GAN + Gradient Penalty, or WGAN-GP, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient norm penalty on the critic to enforce Lipschitz continuity.
The original WGAN uses weight clipping to enforce the 1-Lipschitz constraint on the critic, but this can lead to undesirable behaviour: pathological value surfaces, underuse of model capacity, and exploding or vanishing gradients unless the clipping threshold is tuned carefully.
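For contrast, the hard weight clipping used by the original WGAN can be sketched in a few lines. This is a minimal, framework-free illustration (the function name `clip_weights` and the flat list of weights are assumptions for the example; in practice the clamp is applied in-place to each critic parameter tensor after every update):

```python
def clip_weights(weights, c=0.01):
    # Original WGAN's hard constraint: clamp every critic parameter to [-c, c].
    # c = 0.01 is the default clipping threshold from the WGAN paper.
    return [max(-c, min(c, w)) for w in weights]

# Weights inside the box are untouched; those outside are clamped to the boundary.
print(clip_weights([0.5, -0.5, 0.005]))  # [0.01, -0.01, 0.005]
```

Because every parameter is forced into a small box regardless of its role, the critic tends toward overly simple functions, which is the capacity underuse problem that the gradient penalty avoids.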
A gradient penalty is a soft version of the Lipschitz constraint, which follows from the fact that a differentiable function is 1-Lipschitz if and only if its gradient has norm at most 1 everywhere. WGAN-GP penalises the squared deviation of the critic's gradient norm from 1, evaluated at points sampled uniformly along straight lines between real and generated samples.
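The penalty term can be sketched without an autograd framework by using a linear critic, whose gradient is analytic. This is a minimal illustration, not the paper's implementation: the names `critic` and `gradient_penalty` are hypothetical, and in a real WGAN-GP the gradient at the interpolated point is obtained via automatic differentiation (e.g. `torch.autograd.grad` with `create_graph=True`) rather than in closed form:

```python
import random

def critic(x, w):
    # Hypothetical linear critic f(x) = <w, x>; its gradient w.r.t. x is w,
    # so the gradient norm is ||w|| at every point (no autograd needed here).
    return sum(wi * xi for wi, xi in zip(w, x))

def gradient_penalty(real, fake, w, lam=10.0):
    # Sample a point x_hat on the line between a real and a fake sample,
    # then penalise the squared deviation of the critic's gradient norm from 1.
    # lam = 10 is the penalty coefficient used in the WGAN-GP paper.
    eps = random.random()
    x_hat = [eps * r + (1 - eps) * f for r, f in zip(real, fake)]
    grad = w  # gradient of the linear critic, constant at every x_hat
    grad_norm = sum(g * g for g in grad) ** 0.5
    return lam * (grad_norm - 1.0) ** 2

# A critic with unit gradient norm incurs zero penalty:
print(gradient_penalty([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]))  # 0.0
```

The penalty is added to the critic's Wasserstein loss, so training trades off separating real from fake samples against keeping gradient norms near 1.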
Papers Using This Method
- FairGAN: GANs-based Fairness-aware Learning for Recommendations with Implicit Feedback (2022-04-25)
- Alleviating Mode Collapse in GAN via Diversity Penalty Module (2021-08-05)
- PriorGAN: Real Data Prior for Generative Adversarial Nets (2020-06-30)
- Mimicry: Towards the Reproducibility of GAN Research (2020-05-05)
- Generating Geological Facies Models with Fidelity to Diversity and Statistics of Training Images using Improved Generative Adversarial Networks (2019-09-23)
- How Can We Make GAN Perform Better in Single Medical Image Super-Resolution? A Lesion Focused Multi-Scale Approach (2019-01-10)
- Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation (2018-04-25)
- Language Modeling with Generative Adversarial Networks (2018-04-08)
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium (2017-06-26)
- Continual Learning with Deep Generative Replay (2017-05-24)
- Improved Training of Wasserstein GANs (2017-03-31)