Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
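To make the byte-level interface concrete, the sketch below shows how text maps to token IDs with no learned vocabulary at all: each UTF-8 byte becomes one ID. The offset-by-3 convention (IDs 0-2 reserved for pad, EOS, and UNK) matches the publicly released ByT5 tokenizer, but the helper names here are illustrative, not from the paper.

```python
# Minimal sketch of byte-level encoding as used by the released ByT5
# checkpoints. Assumption: IDs 0-2 are reserved for <pad>, </s>, <unk>,
# and raw UTF-8 byte values are shifted up by 3 (per the public tokenizer).

SPECIAL_TOKEN_OFFSET = 3  # 0 = <pad>, 1 = </s>, 2 = <unk>


def encode(text: str) -> list[int]:
    """Map a string to token IDs: one ID per UTF-8 byte, no vocabulary lookup."""
    return [b + SPECIAL_TOKEN_OFFSET for b in text.encode("utf-8")]


def decode(ids: list[int]) -> str:
    """Invert encode(), skipping the reserved special-token IDs."""
    byte_values = bytes(i - SPECIAL_TOKEN_OFFSET for i in ids if i >= SPECIAL_TOKEN_OFFSET)
    return byte_values.decode("utf-8", errors="ignore")


print(encode("héllo"))          # [107, 198, 172, 111, 111, 114] -- 6 IDs for 5 characters
print(decode(encode("héllo")))  # héllo
```

Note that multi-byte characters expand into several IDs (the "é" above becomes two), which is exactly why byte sequences are longer than subword token sequences.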
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | TweetQA | BLEU-1 | 72.0 | ByT5-Small |
| Question Answering | TweetQA | BLEU-1 | 70.8 | mT5 |
| Question Answering | TweetQA | ROUGE-L | 74.3 | mT5 |
| Question Answering | TweetQA | ROUGE-L | 75.7 | ByT5 |
| Question Answering | XQuAD | EM | 63.6 | ByT5-XXL |
| Question Answering | XQuAD | F1 | 79.7 | ByT5-XXL |
| Question Answering | TyDiQA-GoldP | EM | 81.9 | ByT5 (fine-tuned) |
| Question Answering | TyDiQA-GoldP | EM | 60.0 | ByT5-XXL |
| Question Answering | TyDiQA-GoldP | F1 | 75.3 | ByT5-XXL |
| Question Answering | MLQA | EM | 54.9 | ByT5-XXL |
| Question Answering | MLQA | F1 | 71.6 | ByT5-XXL |
| Natural Language Inference | XNLI | Accuracy | 83.7 | ByT5-XXL |
| Natural Language Inference | XNLI | Accuracy | 69.1 | ByT5-Small |
| Cross-Lingual Transfer | WikiAnn NER | F1 | 67.7 | ByT5-XXL |
| Extreme Summarization | GEM-XSum | BLEU | 15.3 | ByT5 |
| Extreme Summarization | GEM-XSum | BLEU | 14.3 | mT5 |
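The released checkpoints can be loaded with standard libraries. A hedged usage sketch follows, assuming the models are mirrored on the Hugging Face hub under `google/byt5-*` (a hosting detail not stated in the abstract); the tokenizer delegates to the raw-byte encoding shown earlier, so identical code works for any input language.

```python
# Usage sketch (assumption: the released checkpoints are hosted on the
# Hugging Face hub as "google/byt5-small"; name not taken from the abstract).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# One fine-tuning-style forward pass: inputs and labels are both plain
# byte sequences, so no language-specific preprocessing is needed.
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
labels = tokenizer("La vie est comme une boîte de chocolat.",
                   return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
print(float(loss))
```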