Description
Mixture Model Network (MoNet) is a general framework for designing convolutional deep architectures on non-Euclidean domains such as graphs and manifolds. It generalizes convolution by aggregating neighbor features with learnable Gaussian mixture kernels evaluated on pseudo-coordinates of each edge.
Image and description from: Geometric deep learning on graphs and manifolds using mixture model CNNs
Papers Using This Method
- Monet: Mixture of Monosemantic Experts for Transformers (2024-12-05)
- Towards Scalable Foundation Models for Digital Dermatology (2024-11-08)
- Self-Supervised Interpretable End-to-End Learning via Latent Functional Modularity (2024-02-21)
- Multilinear Operator Networks (2024-01-31)
- MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation (2023-12-15)
- Ablation Study to Clarify the Mechanism of Object Segmentation in Multi-Object Representation Learning (2023-10-05)
- The MONET dataset: Multimodal drone thermal dataset recorded in rural scenarios (2023-04-11)
- MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking (2022-11-10)
- Multi-Order Networks for Action Unit Detection (2022-02-01)
- Joint Detection of Motion Boundaries and Occlusions (2021-11-01)
- Complementing the Linear-Programming Learning Experience with the Design and Use of Computerized Games: The Formula 1 Championship Game (2021-09-19)
- Motion-guided Non-local Spatial-Temporal Network for Video Crowd Counting (2021-04-28)
- Cycle Generative Adversarial Networks Algorithm With Style Transfer For Image Generation (2021-01-11)
- Language-Mediated, Object-Centric Representation Learning (2020-12-31)
- MoNet: Motion-based Point Cloud Prediction Network (2020-11-21)
- Memory Optimization for Deep Networks (2020-10-27)
- Efficient, high-performance pancreatic segmentation using multi-scale feature extraction (2020-09-02)
- MONET: Debiasing Graph Embeddings via the Metadata-Orthogonal Training Unit (2019-09-25)
- GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations (2019-07-30)
- Hallucinating Optical Flow Features for Video Classification (2019-05-28)