Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Crosslingual Generalization through Multitask Finetuning

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, Colin Raffel

2022-11-03 · Zero-shot Generalization · Question Answering · Sentence Completion · Coreference Resolution · Cross-Lingual Transfer · Zero-Shot Learning
Paper · PDF · Code (official)

Abstract

Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task generalization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks, leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets, and models are freely available at https://github.com/bigscience-workshop/xmtf.
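
The cross-lingual zero-shot behavior the abstract describes is easy to probe directly. Below is a minimal sketch that prompts an mT0 checkpoint in English on non-English input, assuming the `bigscience/mt0-small` weights published on the Hugging Face Hub and the standard `transformers` API; mT0 is mT5-based (encoder-decoder), while the BLOOMZ checkpoints are decoder-only and would load via `AutoModelForCausalLM` instead.

```python
# Minimal zero-shot prompting sketch, assuming the mT0 checkpoints on the
# Hugging Face Hub (bigscience organization) and the standard transformers API.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-small"  # small variant; larger ones share the same API

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# An English prompt applied to non-English input: the cross-lingual
# zero-shot setting the abstract describes.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Larger variants (e.g. `bigscience/mt0-xxl` or the BLOOMZ family) expose the same interface, and the xP3 finetuning mixture introduced in the paper is distributed through the same organization on the Hub.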

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | StoryCloze | Accuracy | 96.3 | BLOOMZ |
| Cross-Lingual Transfer | XCOPA | Accuracy | 84.45 | mT0-13B |
| Cross-Lingual Transfer | XCOPA | Accuracy | 75.5 | BLOOMZ |
| Coreference Resolution | XWinograd EN | Accuracy | 81.29 | mT0-13B |
| Coreference Resolution | XWinograd EN | Accuracy | 69.08 | BLOOMZ |
| Coreference Resolution | XWinograd FR | Accuracy | 78.31 | mT0-13B |
| Coreference Resolution | XWinograd FR | Accuracy | 68.67 | BLOOMZ |
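
All of these benchmarks (StoryCloze, XCOPA, XWinograd) are multiple-choice tasks. The page does not spell out the scoring protocol, but a common approach for prompted language models, sketched below under that assumption, is rank classification: score each candidate continuation by its log-likelihood under the model and predict the highest-scoring one. The helper name `candidate_logprob`, the example item, and the `bigscience/bloomz-560m` checkpoint are all illustrative choices, not taken from the paper.

```python
# Illustrative rank-classification scorer for multiple-choice tasks such as
# XCOPA or XWinograd: pick the candidate continuation to which the model
# assigns the highest total log-probability. The exact evaluation setup
# behind the table above may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # small BLOOMZ variant for illustration
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

def candidate_logprob(context: str, candidate: str) -> float:
    """Sum of token log-probabilities of `candidate` given `context`."""
    # Assumes the context tokenization is a prefix of the full tokenization,
    # which holds for typical BPE tokenizers when the candidate starts with
    # whitespace.
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position i predict the token at position i + 1.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for pos in range(ctx_len, full_ids.shape[1]):
        total += logprobs[pos - 1, full_ids[0, pos]].item()
    return total

# Hypothetical XCOPA-style item (not from the dataset).
context = "The man felt tired, so"
candidates = [" he went to bed early.", " he ran a marathon."]
print(max(candidates, key=lambda c: candidate_logprob(context, c)))
```

Accuracy on a benchmark is then simply the fraction of items whose highest-likelihood candidate matches the gold label.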

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Enhancing Cross-task Transfer of Large Language Models via Activation Steering (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)