Description
BLOOMZ is a variant of BLOOM produced by multitask prompted finetuning (MTF): the pretrained BLOOM model is finetuned on xP3, a crosslingual mixture of tasks with English prompts, which gives the model zero-shot generalization to unseen tasks and languages.
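Because BLOOMZ checkpoints are published on the Hugging Face Hub, the method can be tried directly with the transformers library. The sketch below is a minimal zero-shot prompting example, assuming the publicly released bigscience/bloomz-560m checkpoint (the smallest size) and a recent transformers install; the prompt string is illustrative, not taken from the paper.

```python
# Minimal sketch: zero-shot prompting with a BLOOMZ checkpoint.
# Assumes the Hugging Face `transformers` library and the public
# bigscience/bloomz-560m checkpoint; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # smallest released BLOOMZ size
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# MTF trains the model to follow natural-language task instructions,
# so a downstream task is posed as a plain prompt.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```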
Papers Using This Method
Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study (2025-05-09)
Benchmarking the Performance of Pre-trained LLMs across Urdu NLP Tasks (2024-05-24)
MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property (2024-02-26)
LexC-Gen: Generating Data for Extremely Low-Resource Languages with Large Language Models and Bilingual Lexicons (2024-02-21)
ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic (2024-02-20)
Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model (2024-02-12)
Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon (2024-02-03)
Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages (2024-01-11)
Crosslingual Retrieval Augmented In-context Learning for Bangla (2023-11-01)
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages (2023-10-23)
Large Language Models Only Pass Primary School Exams in Indonesia: A Comprehensive Test on IndoMMLU (2023-10-07)
Flesch or Fumble? Evaluating Readability Standard Alignment of Instruction-Tuned Language Models (2023-09-11)
Efficient Finetuning Large Language Models For Vietnamese Chatbot (2023-09-09)
Translate Meanings, Not Just Words: IdiomKB's Role in Optimizing Idiomatic Translation with Language Models (2023-08-26)
An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning (2023-08-17)
EcomGPT: Instruction-tuning Large Language Models with Chain-of-Task Tasks for E-commerce (2023-08-14)
Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks (2023-06-07)
shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation (2023-06-05)
LAraBench: Benchmarking Arabic AI with Large Language Models (2023-05-24)
Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling (2023-04-18)