Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks: a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the model's ability to learn mutually coherent inference and generation networks through inspection of model samples and reconstructions, and we confirm the usefulness of the learned representations by achieving performance competitive with the state of the art on semi-supervised SVHN and CIFAR-10 tasks.
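The adversarial game described above can be sketched numerically. The toy code below stands in for the paper's deep networks with linear maps (all weights and dimensions here are hypothetical, chosen only for illustration): a generation network decodes a prior latent into data space, an inference network encodes real data into latent space, and a discriminator scores joint (x, z) pairs from each side.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, z_dim = 4, 2

# Toy linear networks (hypothetical stand-ins for the paper's deep nets).
W_gen = rng.normal(size=(x_dim, z_dim))    # generation network: z -> x
W_inf = rng.normal(size=(z_dim, x_dim))    # inference network:  x -> z
w_dis = rng.normal(size=(x_dim + z_dim,))  # discriminator on joint (x, z)

def discriminator(x, z):
    """Probability that a joint (x, z) pair comes from the inference side."""
    joint = np.concatenate([x, z], axis=-1)
    return 1.0 / (1.0 + np.exp(-joint @ w_dis))

# Inference-side joint sample: real data x with its encoding z_hat.
x_real = rng.normal(size=(x_dim,))
z_hat = W_inf @ x_real

# Generation-side joint sample: prior latent z with its decoding x_tilde.
z_prior = rng.normal(size=(z_dim,))
x_tilde = W_gen @ z_prior

# Discriminator's cross-entropy objective: tell the two joint samples apart.
loss_d = -(np.log(discriminator(x_real, z_hat))
           + np.log(1.0 - discriminator(x_tilde, z_prior)))
print(float(loss_d) > 0.0)  # prints True: the cross-entropy loss is positive
```

In training, the generation and inference networks would be updated to maximize this loss while the discriminator minimizes it; equilibrium is reached when the two joint distributions match, which is what couples the learned encoder and decoder.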
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image-to-Image Translation | Cityscapes Labels-to-Photo | Class IOU | 0.02 | BiGAN |
| Image-to-Image Translation | Cityscapes Photo-to-Labels | Class IOU | 0.07 | BiGAN |