Emmanuel Kahembwe, Subramanian Ramamoorthy
This work presents an analysis of the discriminators used in Generative Adversarial Networks (GANs) for video. We show that unconstrained video discriminator architectures induce a loss surface with high curvature, which makes optimisation difficult. We also show that this curvature becomes more extreme as the maximal kernel dimension of the video discriminator increases. With these observations in hand, we propose a family of efficient Lower-Dimensional Video Discriminators for GANs (LDVD GANs). The proposed family of discriminators improves the performance of the video GAN models it is applied to and performs well on complex and diverse datasets such as UCF-101. In particular, we show that it can double the performance of Temporal-GANs and achieves state-of-the-art performance on a single GPU.
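To make the "lower-dimensional discriminator" idea concrete: one way to reduce the maximal kernel dimension is to replace a full 3D (time x height x width) convolution with a 2D spatial convolution followed by a 1D temporal one. The sketch below only counts weights for illustrative layer shapes; the channel sizes and kernel shapes are assumptions, not the paper's exact architecture.

```python
def conv_params(in_ch, out_ch, kernel):
    """Weight count for a convolution with the given kernel shape (bias omitted)."""
    n = in_ch * out_ch
    for k in kernel:
        n *= k
    return n

in_ch, out_ch = 64, 128

# Unconstrained 3D discriminator kernel: 3 (time) x 3 x 3 (space)
full_3d = conv_params(in_ch, out_ch, (3, 3, 3))

# Lower-dimensional factorisation: 2D spatial (1x3x3) then 1D temporal (3x1x1)
spatial = conv_params(in_ch, out_ch, (1, 3, 3))
temporal = conv_params(out_ch, out_ch, (3, 1, 1))
factored = spatial + temporal

print(full_3d, factored)  # → 221184 122880
```

The factorised pair covers the same spatio-temporal receptive field with a maximal kernel dimension of 2 instead of 3 and roughly 45% fewer parameters at these (hypothetical) layer sizes.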
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | UCF-101 16 frames, Unconditional, Single GPU | Inception Score | 22.91 | TGAN-F |
| Video | UCF-101 16 frames, 128x128, Unconditional | Inception Score | 22.91 | TGAN-F |
| Video | UCF-101 16 frames, 64x64, Unconditional | FID | 8943 | TGAN-F |
| Video | UCF-101 16 frames, 64x64, Unconditional | Inception Score | 13.62 | TGAN-F |