Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LOGAN

Computer Vision · Introduced 2019 · 6 papers
Source Paper

Description

LOGAN is a generative adversarial network that optimises its latent variables with natural gradient descent (NGD). For the Fisher matrix in NGD, the authors use the empirical Fisher $F'$ with Tikhonov damping:

$$F' = g \cdot g^{T} + \beta I$$

They also apply Euclidean norm regularization to the optimization step.
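The damped empirical Fisher above makes the natural gradient cheap to compute: since $F' = g g^{T} + \beta I$ is a rank-one update of a scaled identity, $F'^{-1} g$ has the closed form $g / (g^{T} g + \beta)$ by the Sherman–Morrison formula. A minimal sketch of one latent-optimisation step, assuming a hypothetical `grad_fn` that returns $g = \partial f(z) / \partial z$ (e.g. the gradient of the discriminator score with respect to the latent) and illustrative values for the step size and damping:

```python
import numpy as np

def ngd_latent_step(z, grad_fn, alpha=0.9, beta=5.0, clip=1.0):
    """One latent-optimisation step with natural gradient descent.

    Uses the empirical Fisher with Tikhonov damping, F' = g g^T + beta*I.
    Because F' is a rank-one update of beta*I, the natural gradient
    F'^{-1} g simplifies to g / (g^T g + beta) (Sherman-Morrison),
    so no explicit matrix inverse is needed.
    """
    g = grad_fn(z)                       # g = df(z)/dz
    natural_grad = g / (g @ g + beta)    # F'^{-1} g in closed form
    z_new = z + alpha * natural_grad     # move z to raise the score f(z)
    return np.clip(z_new, -clip, clip)   # keep z in the support of U(-1, 1)
```

The clipping at the end is why the uniform prior described below is a more consistent choice than a Gaussian: the optimised latent always stays inside $[-1, 1]$.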

For LOGAN's base architecture, BigGAN-deep is used with a few modifications:

1. increasing the size of the latent source from 128 to 256, to compensate for the randomness of the source lost when optimising $z$;
2. using the uniform distribution $U\left(-1, 1\right)$ instead of the standard normal distribution $N\left(0, 1\right)$ for $p\left(z\right)$, to be consistent with the clipping operation;
3. using leaky ReLU (with a slope of 0.2 for the negative part) instead of ReLU as the non-linearity, for smoother gradient flow in $\frac{\partial f\left(z\right)}{\partial z}$.
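The latter two modifications can be sketched directly; this is an illustrative NumPy version (the helper names `sample_latent` and `leaky_relu` are not from the paper), not the BigGAN-deep implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(batch, dim=256):
    # Latent prior p(z) = U(-1, 1): every sample already lies in [-1, 1],
    # so clipping z back into that range after optimisation stays
    # consistent with the prior (unlike an unbounded N(0, 1)).
    return rng.uniform(-1.0, 1.0, size=(batch, dim))

def leaky_relu(x, slope=0.2):
    # Leaky ReLU keeps a nonzero gradient (slope 0.2) for x < 0,
    # giving a smoother df(z)/dz than plain ReLU, whose gradient
    # is exactly zero on the negative side.
    return np.where(x >= 0, x, slope * x)
```

The larger 256-dimensional latent is reflected in the default `dim`, matching the enlarged latent source described above.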

Papers Using This Method

- Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN (2022-11-16)
- Sinogram Denoise Based on Generative Adversarial Networks (2021-08-09)
- Direct Reconstruction of Linear Parametric Images from Dynamic PET Using Nonlocal Deep Image Prior (2021-06-18)
- LOGAN: Local Group Bias Detection by Clustering (2020-10-06)
- Allpass Feedback Delay Networks (2020-07-14)
- LOGAN: Latent Optimisation for Generative Adversarial Networks (2019-12-02)