Yuting Li, Dexiong Chen, Tinglong Tang, Xi Shen
We explore the application of the Vision Transformer (ViT) to handwritten text recognition. The limited availability of labeled data in this domain makes it challenging to achieve high performance with ViT alone, and previous transformer-based models have required external data or extensive pre-training on large datasets to excel. To address this limitation, we introduce a data-efficient ViT method that uses only the encoder of the standard transformer. We find that incorporating a Convolutional Neural Network (CNN) for feature extraction in place of the original patch embedding, and employing the Sharpness-Aware Minimization (SAM) optimizer so that the model converges towards flatter minima, yields notable improvements. Furthermore, our span mask technique, which masks interconnected features in the feature map, acts as an effective regularizer. Empirically, our approach competes favorably with traditional CNN-based models on small datasets such as IAM and READ2016. It also establishes a new benchmark on the LAM dataset, currently the largest dataset, with 19,830 training text lines. The code is publicly available at: https://github.com/YutingLi0606/HTR-VT.
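To make the span mask idea concrete, here is a minimal NumPy sketch of masking contiguous spans in a (T, D) feature map, rather than masking positions independently. The function name, parameters, and values below are illustrative assumptions, not the paper's actual implementation; see the linked repository for the real code.

```python
import numpy as np

def span_mask(features, num_spans=2, span_len=4, rng=None):
    """Zero out contiguous spans of frames in a (T, D) feature map.

    Masking connected features (whole spans) rather than isolated
    positions acts as a stronger regularizer, per the abstract.
    num_spans and span_len are illustrative hyperparameters.
    """
    rng = np.random.default_rng(rng)
    masked = features.copy()
    T = features.shape[0]
    for _ in range(num_spans):
        # Pick a random start so the span fits inside the sequence.
        start = rng.integers(0, max(T - span_len, 1))
        masked[start:start + span_len] = 0.0
    return masked

# Usage: mask two 4-frame spans in a 32-frame, 64-dim feature map.
feats = np.ones((32, 64))
out = span_mask(feats, num_spans=2, span_len=4, rng=0)
```

Spans may overlap, so between `span_len` and `num_spans * span_len` frames end up masked on any given call.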
| Task | Dataset | CER (%) | WER (%) | Model |
|---|---|---|---|---|
| Handwritten Text Recognition | IAM (line-level) | 4.7 | 14.9 | HTR-VT |
| Handwritten Text Recognition | READ2016 (line-level) | 3.9 | 16.5 | HTR-VT |
| Handwritten Text Recognition | LAM (line-level) | 2.8 | 7.4 | HTR-VT |