Description
GMVAE, or Gaussian Mixture Variational Autoencoder, is a stochastic regularization layer for transformers. The GMVAE layer is trained on a 700-dimensional internal representation produced by the first MLP layer. For every output of the first MLP layer, the GMVAE layer first computes a low-dimensional latent representation by sampling from the GMVAE posterior distribution, then outputs a reconstruction sampled from the generative model.
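To make the encode-sample-reconstruct flow concrete, here is a minimal PyTorch sketch of such a layer. All class and parameter names, the latent size, and the number of mixture components are illustrative assumptions; only the 700-dimensional input width comes from the description, and the ELBO training objective is omitted.

```python
# Minimal sketch of a GMVAE regularization layer. Hypothetical names and
# hyperparameters; only the 700-dim input width comes from the description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMVAELayer(nn.Module):
    """Encode x to a latent z sampled from a Gaussian-mixture posterior,
    then return a reconstruction sampled from the decoder."""
    def __init__(self, in_dim=700, latent_dim=32, n_components=10):
        super().__init__()
        self.n_components = n_components
        self.latent_dim = latent_dim
        # q(c|x): mixture-assignment logits
        self.to_logits = nn.Linear(in_dim, n_components)
        # q(z|x, c): one diagonal-Gaussian head per mixture component
        self.to_mu = nn.Linear(in_dim, n_components * latent_dim)
        self.to_logvar = nn.Linear(in_dim, n_components * latent_dim)
        # p(x|z): decoder producing the reconstruction
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim)
        )

    def forward(self, x):
        B = x.shape[0]
        logits = self.to_logits(x)                           # (B, K)
        mu = self.to_mu(x).view(B, self.n_components, -1)    # (B, K, D)
        logvar = self.to_logvar(x).view(B, self.n_components, -1)
        # Sample a mixture component per example; Gumbel-softmax keeps the
        # discrete choice differentiable (hard=True gives one-hot picks).
        c = F.gumbel_softmax(logits, tau=1.0, hard=True)     # (B, K)
        mu_c = (c.unsqueeze(-1) * mu).sum(dim=1)             # (B, D)
        logvar_c = (c.unsqueeze(-1) * logvar).sum(dim=1)
        # Reparameterized sample from the selected Gaussian posterior.
        z = mu_c + torch.randn_like(mu_c) * (0.5 * logvar_c).exp()
        # The stochasticity of z is what acts as the regularizing noise.
        return self.decoder(z)

# Usage: drop-in after the first MLP layer of a transformer block.
layer = GMVAELayer()
h = torch.randn(8, 700)       # batch of first-MLP-layer activations
h_reconstructed = layer(h)    # same shape, (8, 700)
```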
Papers Using This Method
Physically Interpretable Representation and Controlled Generation for Turbulence Data (2025-01-31)
MARTA: a model for the automatic phonemic grouping of the parkinsonian speech (2024-03-19)
Latent Combinational Game Design (2022-06-28)
Variational embedding of protein folding simulations using Gaussian mixture variational autoencoders (2021-08-27)
Regularizing Transformers With Deep Probabilistic Layers (2021-08-23)