Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Knowledge Distillation

General · Introduced 2000 · 3071 papers
Source Paper

Distilling the Knowledge in a Neural Network (Hinton et al., 2015)

Description

A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.

Source: Distilling the Knowledge in a Neural Network
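In practice, the compression technique described above trains the student to match the teacher's temperature-softened class probabilities while also fitting the hard labels. Below is a minimal PyTorch sketch of that objective; the temperature T, mixing weight alpha, and the teacher/student models in the usage comments are illustrative placeholders, not values or code from the source paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Distillation objective: KL divergence between temperature-softened
    teacher and student distributions, blended with the usual hard-label loss.
    T and alpha are illustrative hyperparameters."""
    # Soften both distributions with temperature T; scaling by T*T keeps the
    # soft-target gradients on roughly the same scale as the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage sketch (teacher and student are any classifiers over the same labels;
# the teacher's logits may come from averaging an ensemble):
# teacher.eval()
# for x, y in loader:
#     with torch.no_grad():
#         t_logits = teacher(x)
#     s_logits = student(x)
#     loss = distillation_loss(s_logits, t_logits, y)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```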

Papers Using This Method

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)
SFedKD: Sequential Federated Learning with Discrepancy-Aware Multi-Teacher Knowledge Distillation (2025-07-11)
Continual Self-Supervised Learning with Masked Autoencoders in Remote Sensing (2025-06-26)
G$^{2}$D: Boosting Multimodal Learning with Gradient-Guided Distillation (2025-06-26)
Distilling Normalizing Flows (2025-06-26)
FedBKD: Distilled Federated Learning to Embrace Gerneralization and Personalization on Non-IID Data (2025-06-25)
Client Clustering Meets Knowledge Sharing: Enhancing Privacy and Robustness in Personalized Peer-to-Peer Learning (2025-06-25)
Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation (2025-06-25)
Towards Scalable and Generalizable Earth Observation Data Mining via Foundation Model Composition (2025-06-25)
Building Lightweight Semantic Segmentation Models for Aerial Images Using Dual Relation Distillation (2025-06-25)
Distillation-Enabled Knowledge Alignment for Generative Semantic Communications in AIGC Provisioning Tasks (2025-06-24)
Recalling The Forgotten Class Memberships: Unlearned Models Can Be Noisy Labelers to Leak Privacy (2025-06-24)
PicoSAM2: Low-Latency Segmentation In-Sensor for Edge Vision Applications (2025-06-23)
Multimodal Fusion SLAM with Fourier Attention (2025-06-22)
Fine-grained Image Retrieval via Dual-Vision Adaptation (2025-06-19)
Factorized RVQ-GAN For Disentangled Speech Tokenization (2025-06-18)
Knowledge Distillation Framework for Accelerating High-Accuracy Neural Network-Based Molecular Dynamics Simulations (2025-06-18)