Description
MobileBERT is an inverted-bottleneck variant of BERT that compresses and accelerates the popular BERT model. It is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks. To train MobileBERT, a specially designed teacher model, an inverted-bottleneck BERT_LARGE (IB-BERT), is trained first; knowledge is then transferred from this teacher to MobileBERT layer-to-layer, with the student imitating the teacher's layer outputs. Like the original BERT, MobileBERT is task-agnostic, that is, it can be applied generically to various downstream NLP tasks via simple fine-tuning.
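To make the two ideas concrete, here is a minimal PyTorch sketch of (a) a bottleneck transformer layer that projects the wide hidden state down to a narrow intra-block width before attention and the feed-forward network, then back up at the output, and (b) a layer-to-layer feature-map transfer loss between student and teacher hidden states. The class names, the single-FFN layout, and the use of standard LayerNorm are illustrative assumptions, not the exact MobileBERT block (which stacks several feed-forward sub-layers and combines this loss with attention-transfer and distillation objectives).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckTransformerLayer(nn.Module):
    """Illustrative MobileBERT-style layer: the wide hidden state is
    projected down to a narrow intra-block width, processed by
    self-attention and a feed-forward network, then projected back up.
    Dimensions below follow the paper's 512/128 sizes but are adjustable."""
    def __init__(self, hidden=512, intra=128, heads=4, ffn_mult=4):
        super().__init__()
        self.down = nn.Linear(hidden, intra)   # input bottleneck
        self.attn = nn.MultiheadAttention(intra, heads, batch_first=True)
        self.ffn = nn.Sequential(              # narrow feed-forward network
            nn.Linear(intra, intra * ffn_mult),
            nn.GELU(),
            nn.Linear(intra * ffn_mult, intra),
        )
        self.up = nn.Linear(intra, hidden)     # output bottleneck
        self.norm1 = nn.LayerNorm(intra)
        self.norm2 = nn.LayerNorm(intra)
        self.norm3 = nn.LayerNorm(hidden)

    def forward(self, x):
        h = self.down(x)                       # compress to intra-block width
        a, _ = self.attn(h, h, h)
        h = self.norm1(h + a)
        h = self.norm2(h + self.ffn(h))
        return self.norm3(x + self.up(h))      # residual in the wide space

def layerwise_transfer_loss(student_hidden, teacher_hidden):
    """Feature-map transfer: MSE between per-layer hidden states of the
    student and the IB-BERT teacher, which share the same hidden width."""
    return sum(F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden))
```

In a full training recipe, this feature-map loss would be summed with attention-transfer and pre-training distillation terms during knowledge transfer, after which the student is fine-tuned on downstream tasks on its own.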
Papers Using This Method
Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs (2025-03-21)
FedMentalCare: Towards Privacy-Preserving Fine-Tuned LLMs to Analyze Mental Health Status Using Federated Learning Framework (2025-02-27)
Resource-Efficient Transformer Architecture: Optimizing Memory and Execution Time for Real-Time Applications (2024-12-25)
Efficient Deployment of Transformer Models in Analog In-Memory Computing Hardware (2024-11-26)
On-Device Emoji Classifier Trained with GPT-based Data Augmentation for a Mobile Keyboard (2024-11-06)
PhishLang: A Real-Time, Fully Client-Side Phishing Detection Framework Using MobileBERT (2024-08-11)
Toward Attention-based TinyML: A Heterogeneous Accelerated Architecture and Automated Deployment Flow (2024-08-05)
Quantized Transformer Language Model Implementations on Edge Devices (2023-10-06)
AutoDistill: an End-to-End Framework to Explore and Distill Hardware-Efficient Language Models (2022-01-21)
Character-level HyperNetworks for Hate Speech Detection (2021-11-11)
LIDSNet: A Lightweight on-device Intent Detection model using Deep Siamese Network (2021-10-06)
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (2020-04-06)