Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Pay Attention to What You Read: Non-recurrent Handwritten Text-Line Recognition

Lei Kang, Pau Riba, Marçal Rusiñol, Alicia Fornés, Mauricio Villegas

2020-05-26 · Handwriting Recognition · Few-Shot Learning · Handwritten Text Recognition

Abstract

The advent of recurrent neural networks marked an important milestone for handwriting recognition, reaching impressive accuracies despite the great variability observed across writing styles. Sequential architectures are a natural fit for modelling text lines, not only because of the inherent temporal nature of text, but also because they can learn probability distributions over sequences of characters and words. However, such recurrent paradigms come at a cost at training time, since their sequential pipelines prevent parallelization. In this work, we introduce a non-recurrent approach to handwritten text recognition based on transformer models. We propose a novel method that bypasses recurrence entirely: by using multi-head self-attention layers at both the visual and the textual stage, we tackle character recognition while also learning language-related dependencies among the character sequences to be decoded. Our model is not constrained to any predefined vocabulary, and can therefore recognize out-of-vocabulary words, i.e. words that do not appear in the training vocabulary. We significantly advance over the prior art and demonstrate that satisfactory recognition accuracies are obtained even in few-shot learning scenarios.
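The vocabulary-free property follows from decoding at the character level: since the model emits one character per step, any word built from known characters can be produced, even if the word itself never appeared in training. A minimal sketch of such a character-level encoder/decoder (the charset, special tokens, and function names here are illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical character-level vocabulary, as used in sketch form below.
# Special tokens frame the sequence; the model would predict one id per step.
CHARSET = "abcdefghijklmnopqrstuvwxyz '"
SPECIALS = ["<pad>", "<sos>", "<eos>"]
itos = SPECIALS + list(CHARSET)           # id -> symbol
stoi = {c: i for i, c in enumerate(itos)}  # symbol -> id

def encode(text: str) -> list[int]:
    """Wrap a transcription in start/end tokens and map chars to ids."""
    return [stoi["<sos>"]] + [stoi[c] for c in text] + [stoi["<eos>"]]

def decode(ids: list[int]) -> str:
    """Map predicted ids back to text, dropping special tokens."""
    return "".join(itos[i] for i in ids if itos[i] not in SPECIALS)

# An out-of-vocabulary word round-trips fine as long as its characters are known:
print(decode(encode("zyzzyva")))  # zyzzyva
```

The design choice is that the output space is the character set, not a word lexicon, so the "vocabulary" a word-level model would need simply does not exist as a constraint.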

Results

Task                                | Dataset | Metric | Value | Model
Optical Character Recognition (OCR) | IAM     | CER    | 4.67  | Transformer w/ CNN (+synth)
Optical Character Recognition (OCR) | IAM     | CER    | 7.62  | Transformer w/ CNN
Handwritten Text Recognition        | IAM     | CER    | 4.67  | Transformer w/ CNN (+synth)
Handwritten Text Recognition        | IAM     | CER    | 7.62  | Transformer w/ CNN
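The CER (character error rate) values above are the standard metric for this task: the Levenshtein edit distance between the predicted and reference transcriptions, divided by the reference length. A minimal self-contained computation (the function name is ours, not from the paper):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance / reference length.

    Computed with the classic dynamic-programming edit-distance,
    keeping only two rows of the DP table at a time.
    """
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # distances from the empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n] / m

# One inserted character over an 11-character reference -> 1/11 ~ 9.09%
print(round(cer("handwriting", "handwritting") * 100, 2))  # 9.09
```

A CER of 4.67 on IAM, as in the table, means roughly 4.7 character-level edits per 100 reference characters.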

Related Papers

- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
- An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)
- Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport (2025-07-08)
- Advancing Offline Handwritten Text Recognition: A Systematic Review of Data Augmentation and Generation Techniques (2025-07-08)
- ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation (2025-07-03)
- A Transformer Based Handwriting Recognition System Jointly Using Online and Offline Features (2025-06-25)
- Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications (2025-06-25)