
UL2

Natural Language Processing · Introduced 2022 · 8 papers

Source Paper: UL2: Unifying Language Learning Paradigms

Description

UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms. UL2 also introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes via dedicated mode tokens.
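The Mixture-of-Denoisers described above can be sketched in code. The snippet below is a minimal, illustrative implementation, not the paper's exact recipe: it assumes three denoiser families (R for regular span corruption, S for a prefix-LM-style sequential objective, X for extreme corruption), prepends a mode token to each corrupted input, and uses T5-style `<extra_id_N>` sentinels; the span lengths and corruption rates are placeholder values.

```python
import random

# Illustrative Mixture-of-Denoisers configuration. The mode token ([R], [S],
# [X]) prepended to the input is what enables UL2-style mode switching.
# Hyperparameters here are assumptions, not the paper's exact settings.
MODES = {
    "[R]": {"span_len": 3, "corrupt_rate": 0.15},    # regular span corruption
    "[S]": {"span_len": None, "corrupt_rate": 0.5},  # sequential (prefix-LM)
    "[X]": {"span_len": 12, "corrupt_rate": 0.5},    # extreme corruption
}

def corrupt(tokens, mode, rng):
    """Corrupt a token list under one denoiser; return (inputs, targets)."""
    cfg = MODES[mode]
    n = len(tokens)
    if cfg["span_len"] is None:
        # S-denoiser: keep a prefix as input, predict the suffix.
        split = max(1, int(n * (1 - cfg["corrupt_rate"])))
        inputs = [mode] + tokens[:split] + ["<extra_id_0>"]
        targets = ["<extra_id_0>"] + tokens[split:]
        return inputs, targets
    # R/X-denoisers: mask random spans, replacing each with a sentinel.
    n_spans = max(1, int(n * cfg["corrupt_rate"] / cfg["span_len"]))
    starts = sorted(rng.sample(range(0, n - cfg["span_len"]), n_spans))
    inputs, targets, i, sid = [mode], [], 0, 0
    for s in starts:
        if s < i:  # skip overlapping spans
            continue
        inputs += tokens[i:s] + [f"<extra_id_{sid}>"]
        targets += [f"<extra_id_{sid}>"] + tokens[s:s + cfg["span_len"]]
        i, sid = s + cfg["span_len"], sid + 1
    inputs += tokens[i:]
    return inputs, targets

rng = random.Random(0)
toks = [f"t{i}" for i in range(32)]
inp, tgt = corrupt(toks, "[R]", rng)
```

During pre-training one would sample a mode per example; at fine-tuning time, the same mode token can be prepended to steer the model toward the pre-training scheme best matched to the downstream task.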

Papers Using This Method

- Efficient Stagewise Pretraining via Progressive Subnetworks (2024-02-08)
- TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation (2024-01-25)
- Towards leveraging LLMs for Conditional QA (2023-12-02)
- A Zero-shot and Few-shot Study of Instruction-Finetuned Large Language Models Applied to Clinical and Biomedical Tasks (2023-07-22)
- mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences (2023-05-18)
- ImPaKT: A Dataset for Open-Schema Knowledge Base Construction (2022-12-21)
- Recitation-Augmented Language Models (2022-10-04)
- UL2: Unifying Language Learning Paradigms (2022-05-10)

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.