
Rethinking embedding coupling in pre-trained language models

Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder

2020-10-24 · ICLR 2021

Tasks: Cross-Lingual Paraphrase Identification · Natural Language Understanding · Cross-Lingual NER · Cross-Lingual Question Answering · Named Entity Recognition (NER) · Cross-Lingual Natural Language Inference

Abstract

We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
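The parameter-reallocation argument in the abstract can be made concrete with a back-of-the-envelope sketch. The vocabulary and dimension values below are hypothetical placeholders, not figures from the paper; the function only illustrates how shrinking a decoupled input embedding frees parameters that can be moved into the Transformer layers, while the (larger) output embedding is discarded after pre-training anyway.

```python
# Illustrative parameter accounting for coupled vs. decoupled embeddings.
# All sizes here (vocab_size=120_000, hidden_size=768, input_dim=128) are
# hypothetical examples, not numbers taken from the paper.

def embedding_param_count(vocab_size, hidden_size, input_dim=None, output_dim=None):
    """Return (input_params, output_params, total_params).

    Coupled (both dims None): input and output embeddings share one
    vocab_size x hidden_size matrix, so it is counted once in the total.
    Decoupled: each matrix is sized independently, plus a linear projection
    to/from hidden_size whenever its width differs from hidden_size.
    """
    if input_dim is None and output_dim is None:
        shared = vocab_size * hidden_size
        return shared, shared, shared  # one shared matrix, counted once

    d_in = input_dim or hidden_size
    d_out = output_dim or hidden_size
    in_params = vocab_size * d_in + (d_in * hidden_size if d_in != hidden_size else 0)
    out_params = vocab_size * d_out + (hidden_size * d_out if d_out != hidden_size else 0)
    return in_params, out_params, in_params + out_params

vocab, hidden = 120_000, 768
coupled = embedding_param_count(vocab, hidden)
decoupled = embedding_param_count(vocab, hidden, input_dim=128)

# Parameters freed in the input embedding, available for Transformer layers:
freed = coupled[0] - decoupled[0]
print(f"coupled total: {coupled[2]:,}")        # 92,160,000
print(f"decoupled input: {decoupled[0]:,}")    # 15,458,304
print(f"freed for Transformer layers: {freed:,}")
```

Note that the decoupled *total* during pre-training can exceed the coupled total (the output matrix is full-width), but since the output embedding is dropped before fine-tuning, the fine-tuned model's size is governed by the input embedding plus the Transformer stack, which is where the freed budget is spent.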

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | XQuAD | EM | 46.9 | Decoupled |
| Question Answering | XQuAD | F1 | 63.8 | Decoupled |
| Question Answering | XQuAD | EM | 46.2 | Coupled |
| Question Answering | XQuAD | F1 | 63.2 | Coupled |
| Question Answering | TyDiQA-GoldP | EM | 42.8 | Decoupled |
| Question Answering | TyDiQA-GoldP | F1 | 58.1 | Decoupled |
| Question Answering | MLQA | EM | 37.3 | Coupled |
| Question Answering | MLQA | F1 | 53.1 | Coupled |
| Question Answering | MLQA | F1 | 53.1 | Decoupled |
| Natural Language Inference | XNLI | Accuracy | 71.3 | Decoupled |
| Natural Language Inference | XNLI | Accuracy | 70.7 | Coupled |
| Cross-Lingual Transfer | NER | F1 | 69.2 | Coupled |
| Cross-Lingual Transfer | NER | F1 | 68.9 | Decoupled |

Related Papers

- Vision Language Action Models in Robotic Manipulation: A Systematic Review (2025-07-14)
- Flippi: End To End GenAI Assistant for E-Commerce (2025-07-08)
- A Survey on Vision-Language-Action Models for Autonomous Driving (2025-06-30)
- State and Memory is All You Need for Robust and Reliable AI Agents (2025-06-30)
- Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models (2025-06-28)
- skLEP: A Slovak General Language Understanding Benchmark (2025-06-26)
- SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models (2025-06-25)
- Semantic similarity estimation for domain specific data using BERT and other techniques (2025-06-23)