Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Charformer: Fast Character Transformers via Gradient-based Subword Tokenization

Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler

2021-06-23 · ICLR 2022

Tasks: Paraphrase Identification · Sentiment Analysis · Natural Language Inference · Semantic Textual Similarity · Linguistic Acceptability

Paper · PDF · Code (official) · Code

Abstract

State-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. In this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. To this end, we introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
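As a rough illustration of the GBST idea described above, the sketch below enumerates candidate subword blocks of several sizes, mean-pools characters within each block, scores each candidate per position, and takes a softmax-weighted mixture before downsampling. This is a simplified NumPy sketch, not the paper's implementation: the single linear projection `w` stands in for the learned block scoring network, and the block sizes and downsampling rate are illustrative defaults.

```python
import numpy as np

def gbst(x, block_sizes=(1, 2, 3, 4), w=None, downsample=2, rng=None):
    """Simplified sketch of Gradient-Based Subword Tokenization (GBST).

    x: (seq_len, d) array of character/byte embeddings.
    Returns a shorter sequence of latent subword representations.
    """
    L, d = x.shape
    rng = np.random.default_rng(0) if rng is None else rng
    # Stand-in for the learned block scoring network: one linear
    # projection to a scalar score per position (an assumption here).
    w = rng.standard_normal(d) / np.sqrt(d) if w is None else w

    reps, scores = [], []
    for b in block_sizes:
        pad = (-L) % b
        xp = np.pad(x, ((0, pad), (0, 0)))
        # Mean-pool non-overlapping blocks of size b ...
        blocks = xp.reshape(-1, b, d).mean(axis=1)   # (ceil(L/b), d)
        # ... then upsample back to length L by repeating each block.
        up = np.repeat(blocks, b, axis=0)[:L]        # (L, d)
        reps.append(up)
        scores.append(up @ w)                        # (L,) block scores

    reps = np.stack(reps)                            # (B, L, d)
    scores = np.stack(scores)                        # (B, L)
    # Position-wise softmax over candidate block sizes.
    p = np.exp(scores - scores.max(axis=0))
    p /= p.sum(axis=0)
    # Latent subword at each position: weighted mix of block candidates.
    latent = (p[:, :, None] * reps).sum(axis=0)      # (L, d)

    # Downsample to shorten the sequence the Transformer must process.
    pad = (-L) % downsample
    lp = np.pad(latent, ((0, pad), (0, 0)))
    return lp.reshape(-1, downsample, d).mean(axis=1)
```

Because every step is differentiable (pooling, scoring, softmax mixing), the block selection can be trained end-to-end with the rest of the model, which is the point of the "soft" tokenization.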

Results

Task                         | Dataset                     | Metric              | Value | Model
Natural Language Inference   | MultiNLI                    | Matched             | 83.7  | Charformer-Tall
Natural Language Inference   | MultiNLI                    | Mismatched          | 84.4  | Charformer-Tall
Semantic Textual Similarity  | MRPC                        | F1                  | 91.4  | Charformer-Tall
Semantic Textual Similarity  | STS Benchmark               | Pearson Correlation | 0.873 | Charformer-Tall
Semantic Textual Similarity  | Quora Question Pairs        | Accuracy            | 91.4  | Charformer-Tall
Semantic Textual Similarity  | Quora Question Pairs        | F1                  | 88.5  | Charformer-Tall
Sentiment Analysis           | SST-2 Binary classification | Accuracy            | 91.6  | Charformer-Base
Paraphrase Identification    | Quora Question Pairs        | Accuracy            | 91.4  | Charformer-Tall
Paraphrase Identification    | Quora Question Pairs        | F1                  | 88.5  | Charformer-Tall

Related Papers

AdaptiSent: Context-Aware Adaptive Attention for Multimodal Aspect-Based Sentiment Analysis (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles (2025-07-15)
DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
SentiDrop: A Multi Modal Machine Learning model for Predicting Dropout in Distance Learning (2025-07-14)
GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)