Description
Mirror-BERT converts pretrained language models into effective universal text encoders without any supervision, in 20-30 seconds. It is an extremely simple, fast, and effective contrastive learning technique: it uses fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and maximises their similarity during this identity fine-tuning.
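A minimal sketch of this style of identity fine-tuning, assuming PyTorch and Hugging Face transformers. The helper names (random_span_mask, encode), the span length, and the temperature value are illustrative assumptions, not the authors' exact implementation; the idea shown is the core one: encode each string twice (once intact, once slightly corrupted, with dropout adding further noise) and train with an InfoNCE loss that pulls the two views together.

```python
import random

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)


def random_span_mask(input_ids, mask_id, span=2):
    """Corrupt one random span per sequence to build a 'slightly modified' view."""
    ids = input_ids.clone()
    for row in ids:
        length = int((row != tokenizer.pad_token_id).sum())
        if length > span + 2:  # keep [CLS] and [SEP] intact
            start = random.randint(1, length - span - 1)
            row[start : start + span] = mask_id
    return ids


def encode(input_ids, attention_mask):
    """Mean-pool the final hidden states into one vector per string."""
    out = model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
    mask = attention_mask.unsqueeze(-1).float()
    return (out * mask).sum(1) / mask.sum(1)


sentences = ["a photo of a cat", "the weather is nice today", "he plays the piano"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

model.train()
# Two views of the same strings: the original, and a span-masked copy.
# Dropout inside the encoder makes even identical inputs yield distinct views.
z1 = encode(batch["input_ids"], batch["attention_mask"])
z2 = encode(
    random_span_mask(batch["input_ids"], tokenizer.mask_token_id),
    batch["attention_mask"],
)

# InfoNCE: each string's two views are the positive pair; all other
# in-batch pairs serve as negatives. Temperature 0.04 is an assumed value.
z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
logits = z1 @ z2.T / 0.04
loss = F.cross_entropy(logits, torch.arange(len(sentences)))
loss.backward()
optimizer.step()
```

In practice this loop runs for only a handful of steps over a small set of strings, which is why the conversion takes seconds rather than hours.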
Papers Using This Method
- TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning (2021-11-07)
- Prix-LM: Pretraining for Multilingual Knowledge Base Construction (2021-10-16)
- Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models (2021-10-15)
- Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations (2021-09-27)
- MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models (2021-09-19)
- Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders (2021-04-16)