Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler
State-of-the-art models in natural language processing rely on separate, rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns subword tokenization end-to-end as part of the model. To this end, we introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par with, and sometimes outperforming, subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
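The GBST mechanism described above can be sketched roughly as follows: enumerate candidate blocks of several sizes, score each candidate per position, softmax over block sizes, and take a weighted sum before downsampling. This is a minimal, illustrative NumPy sketch; the scoring network here is a single random linear layer and the pooling/upsampling details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def gbst(X, block_sizes=(1, 2, 3, 4), ds_rate=2, rng=None):
    """Minimal sketch of gradient-based subword tokenization (GBST).

    X: (L, d) array of character/byte embeddings.
    Returns latent subword representations of shape (ceil(L/ds_rate), d).
    """
    L, d = X.shape
    rng = np.random.default_rng(0) if rng is None else rng
    # Block scoring network, simplified to one linear projection (assumption).
    w = rng.standard_normal(d) / np.sqrt(d)

    cands, scores = [], []
    for b in block_sizes:
        # Mean-pool non-overlapping blocks of size b, then upsample back
        # to length L so every position has one candidate per block size.
        pad = (-L) % b
        Xp = np.pad(X, ((0, pad), (0, 0)))
        blocks = Xp.reshape(-1, b, d).mean(axis=1)   # (ceil(L/b), d)
        up = np.repeat(blocks, b, axis=0)[:L]        # (L, d)
        cands.append(up)
        scores.append(up @ w)                        # position-wise block scores

    C = np.stack(cands)                              # (B, L, d)
    S = np.stack(scores)                             # (B, L)
    # Soft selection: softmax over candidate block sizes at each position.
    E = np.exp(S - S.max(axis=0))
    P = E / E.sum(axis=0)
    latent = (P[..., None] * C).sum(axis=0)          # (L, d)

    # Downsample with mean pooling to shorten the byte-level sequence.
    pad = (-L) % ds_rate
    latent = np.pad(latent, ((0, pad), (0, 0)))
    return latent.reshape(-1, ds_rate, d).mean(axis=1)
```

In the actual model, the scoring weights are learned jointly with the downstream Transformer, which is what makes the tokenization "soft" and end-to-end trainable.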
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Natural Language Inference | MultiNLI | Matched | 83.7 | Charformer-Tall |
| Natural Language Inference | MultiNLI | Mismatched | 84.4 | Charformer-Tall |
| Semantic Textual Similarity | MRPC | F1 | 91.4 | Charformer-Tall |
| Semantic Textual Similarity | STS Benchmark | Pearson Correlation | 0.873 | Charformer-Tall |
| Sentiment Analysis | SST-2 Binary classification | Accuracy | 91.6 | Charformer-Base |
| Paraphrase Identification | Quora Question Pairs | Accuracy | 91.4 | Charformer-Tall |
| Paraphrase Identification | Quora Question Pairs | F1 | 88.5 | Charformer-Tall |