Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DAN: a Segmentation-free Document Attention Network for Handwritten Document Recognition

Denis Coquenet, Clément Chatelain, Thierry Paquet

Published: 2022-03-23
Tasks: Handwritten Document Recognition · Handwritten Text Recognition · Segmentation
Links: Paper · PDF · Code (official)

Abstract

Unconstrained handwritten text recognition is a challenging computer vision task. It is traditionally handled by a two-step approach combining line segmentation and text line recognition. For the first time, we propose an end-to-end segmentation-free architecture for the task of handwritten document recognition: the Document Attention Network. In addition to text recognition, the model is trained to label text parts using begin and end tags in an XML-like fashion. The model is made up of an FCN encoder for feature extraction and a stack of transformer decoder layers for a recurrent token-by-token prediction process. It takes whole text documents as input and sequentially outputs characters as well as logical layout tokens. Contrary to the existing segmentation-based approaches, the model is trained without using any segmentation label. We achieve competitive results on the READ 2016 dataset at page level as well as double-page level, with a CER of 3.43% and 3.70%, respectively. We also provide results for the RIMES 2009 dataset at page level, reaching a CER of 4.54%. We provide all source code and pre-trained model weights at https://github.com/FactoDeepLearning/DAN.
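The abstract's "begin and end tags in an XML-like fashion" means the decoder's target sequence interleaves layout tokens with the transcribed characters. A minimal sketch of that serialization, assuming hypothetical layout class names (the actual tag set comes from the READ 2016 / RIMES annotations, not from this snippet):

```python
def serialize(regions):
    """Flatten (layout_class, text) regions into the single character/tag
    sequence a DAN-style decoder would be trained to emit token by token."""
    out = []
    for cls, text in regions:
        out.append(f"<{cls}>")   # begin tag marking the start of a text part
        out.append(text)         # the characters of the text part itself
        out.append(f"</{cls}>")  # end tag closing the text part
    return "".join(out)

# Hypothetical page with a page number and a body region:
page = [("page-number", "17"), ("body", "My dear friend")]
print(serialize(page))
# prints "<page-number>17</page-number><body>My dear friend</body>"
```

Because the layout tags appear in the output sequence itself, the model never needs pixel-level segmentation labels; the tags are supervised just like ordinary characters.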

Results

Task                                | Dataset                 | Metric   | Value | Model
Optical Character Recognition (OCR) | READ 2016               | CER (%)  | 3.22  | DAN
Optical Character Recognition (OCR) | READ 2016               | WER (%)  | 13.63 | DAN
Optical Character Recognition (OCR) | READ 2016 (line-level)  | Test CER | 4.1   | DAN
Optical Character Recognition (OCR) | READ 2016 (line-level)  | Test WER | 17.6  | DAN
Handwritten Text Recognition        | READ 2016               | CER (%)  | 3.22  | DAN
Handwritten Text Recognition        | READ 2016               | WER (%)  | 13.63 | DAN
Handwritten Text Recognition        | READ 2016 (line-level)  | Test CER | 4.1   | DAN
Handwritten Text Recognition        | READ 2016 (line-level)  | Test WER | 17.6  | DAN
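The CER and WER values above are standard edit-distance rates: the Levenshtein distance between the predicted and reference transcription, divided by the reference length, over characters for CER and over whitespace-separated words for WER. A self-contained sketch of how these metrics are computed (a textbook implementation, not the paper's evaluation script):

```python
def levenshtein(ref, hyp):
    """Minimum number of insertions, deletions, and substitutions turning
    ref into hyp, via the classic dynamic-programming recurrence with a
    single rolling row."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # deletion
                        dp[j - 1] + 1,                    # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[len(hyp)]

def cer(ref, hyp):
    """Character error rate in percent, as reported in the table above."""
    return 100.0 * levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word error rate in percent, computed over whitespace-split tokens."""
    ref_words, hyp_words = ref.split(), hyp.split()
    return 100.0 * levenshtein(ref_words, hyp_words) / len(ref_words)
```

For example, `cer("kitten", "sitting")` is 50.0: three edits against a six-character reference. A CER of 3.22% on READ 2016 therefore means roughly one character-level edit per 31 reference characters.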

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)