Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ByT5: Towards a token-free future with pre-trained byte-to-byte models

Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel

2021-05-28

Tasks: Cross-Lingual Paraphrase Identification · Question Answering · Cross-Lingual NER · Extreme Summarization · Cross-Lingual Question Answering · Cross-Lingual Natural Language Inference

Links: Paper · PDF · Code (official)

Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
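The token-free approach described above can be illustrated with a short sketch: instead of a learned subword vocabulary, each UTF-8 byte of the input maps directly to a token id. The +3 offset reserving low ids for pad/eos/unk follows ByT5's released convention; the helper names below are illustrative, not from the paper's codebase.

```python
# Sketch of byte-level "tokenization" as used by token-free models such as
# ByT5: text is encoded as raw UTF-8 bytes, so no vocabulary or text
# preprocessing pipeline is needed, and any language works out of the box.

def byte_encode(text: str, offset: int = 3) -> list[int]:
    """One token id per UTF-8 byte, offset so that ids 0-2 stay
    reserved for special tokens (pad=0, eos=1, unk=2)."""
    return [b + offset for b in text.encode("utf-8")]

def byte_decode(ids: list[int], offset: int = 3) -> str:
    """Invert byte_encode, skipping any reserved special-token ids."""
    return bytes(i - offset for i in ids if i >= offset).decode("utf-8")

ids = byte_encode("héllo")          # 6 ids: "é" is two UTF-8 bytes
assert byte_decode(ids) == "héllo"  # round-trips exactly
```

Note the trade-off the abstract highlights: "héllo" becomes six tokens here, where a subword tokenizer might use one or two, which is why sequence length and inference cost are the central concerns for byte-level models.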

Results

Task                       | Dataset      | Metric     | Value | Model
Question Answering         | TweetQA      | BLEU-1     | 72    | ByT5 (small)
Question Answering         | TweetQA      | BLEU-1     | 70.8  | mT5
Question Answering         | TweetQA      | ROUGE-L    | 74.3  | mT5
Question Answering         | TweetQA      | ROUGE-L    | 75.7  | ByT5
Question Answering         | XQuAD        | EM         | 63.6  | ByT5 XXL
Question Answering         | XQuAD        | F1         | 79.7  | ByT5 XXL
Question Answering         | TyDiQA-GoldP | EM         | 81.9  | ByT5 (fine-tuned)
Question Answering         | TyDiQA-GoldP | EM         | 60    | ByT5 XXL
Question Answering         | TyDiQA-GoldP | F1         | 75.3  | ByT5 XXL
Question Answering         | MLQA         | EM         | 54.9  | ByT5 XXL
Question Answering         | MLQA         | F1         | 71.6  | ByT5 XXL
Natural Language Inference | XNLI         | Accuracy   | 83.7  | ByT5 XXL
Natural Language Inference | XNLI         | Accuracy   | 69.1  | ByT5 Small
Cross-Lingual NER          | WikiAnn      | F1         | 67.7  | ByT5 XXL
Cross-Lingual Transfer     | WikiAnn NER  | F1         | 67.7  | ByT5 XXL
Extreme Summarization      | GEM-XSum     | BLEU score | 15.3  | ByT5
Extreme Summarization      | GEM-XSum     | BLEU score | 14.3  | mT5
