Eleonora Grassucci, Edoardo Cicero, Danilo Comminiello
Latest Generative Adversarial Networks (GANs) achieve outstanding results through large-scale training, employing models with millions of parameters that require extensive computational resources. Building such huge models undermines their replicability and increases training instability. Moreover, multi-channel data, such as images or audio, are usually processed by real-valued convolutional networks that flatten and concatenate the input, often losing intra-channel spatial relations. To address these issues of complexity and information loss, we propose a family of quaternion-valued generative adversarial networks (QGANs). QGANs exploit the properties of quaternion algebra, e.g., the Hamilton product, which allows channels to be processed as a single entity and internal latent relations to be captured, while reducing the overall number of parameters by a factor of 4. We show how to design QGANs and how to extend the proposed approach to advanced models. We compare the proposed QGANs with their real-valued counterparts on several image generation benchmarks. Results show that QGANs obtain better FID scores than real-valued GANs and generate visually pleasing images. Furthermore, QGANs save up to 75% of the training parameters. We believe these results may pave the way to novel, more accessible GANs capable of improving performance while saving computational resources.
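To illustrate the parameter saving mentioned above, here is a minimal NumPy sketch of a quaternion-valued linear map based on the Hamilton product. The function name and the component layout (four stacked blocks for the real, i, j, k parts) are illustrative assumptions, not the paper's actual implementation; the key point is that four shared weight matrices replace one large real-valued matrix, giving the 4x (75%) reduction.

```python
import numpy as np

def hamilton_linear(x, Wr, Wi, Wj, Wk):
    """Quaternion linear map via the Hamilton product (illustrative sketch).

    x            : input of shape (4*n,), stacked as [real, i, j, k] parts
    Wr,Wi,Wj,Wk  : real (m, n) weight matrices shared across all components
    returns      : output of shape (4*m,)
    """
    n = x.shape[0] // 4
    r, i, j, k = x[:n], x[n:2*n], x[2*n:3*n], x[3*n:]
    # Hamilton product rules: each output component mixes all four inputs,
    # reusing the same four weight matrices with different signs.
    out_r = Wr @ r - Wi @ i - Wj @ j - Wk @ k
    out_i = Wr @ i + Wi @ r + Wj @ k - Wk @ j
    out_j = Wr @ j - Wi @ k + Wj @ r + Wk @ i
    out_k = Wr @ k + Wi @ j - Wj @ i + Wk @ r
    return np.concatenate([out_r, out_i, out_j, out_k])

# Parameter comparison for a 4n -> 4m layer:
# a real dense layer needs 16*n*m weights, the quaternion layer only 4*n*m.
n, m = 8, 8
rng = np.random.default_rng(0)
Wr, Wi, Wj, Wk = (rng.standard_normal((m, n)) for _ in range(4))
quat_params = 4 * n * m    # 256
real_params = 16 * n * m   # 1024, i.e., 75% more parameters than quaternion
y = hamilton_linear(rng.standard_normal(4 * n), Wr, Wi, Wj, Wk)
print(quat_params, real_params, y.shape)
```

In a quaternion convolution the same weight-sharing scheme is applied to the kernel, so the saving carries over unchanged to convolutional layers.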
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Generation | STL-10 | FID | 59.611 | QSNGAN |
| Image Generation | STL-10 | IS | 4.987 | QSNGAN |
| Image Generation | CelebA-HQ 128x128 | FID | 29.417 | QSNGAN |
| Image Generation | CelebA-HQ 128x128 | IS | 2.249 | QSNGAN |
| Image Generation | Oxford 102 Flowers 128x128 | FID | 115.838 | QSNGAN |
| Image Generation | Oxford 102 Flowers 128x128 | IS | 3 | QSNGAN |
| Image Generation | CIFAR-10 | FID | 31.966 | QSNGAN |