Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SPTS v2: Single-Point Scene Text Spotting

Yuliang Liu, Jiaxin Zhang, Dezhi Peng, Mingxin Huang, Xinyu Wang, Jingqun Tang, Can Huang, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin

2023-01-04 · Text Spotting · Text Detection
Paper · PDF · Code (official)

Abstract

End-to-end scene text spotting has made significant progress owing to the intrinsic synergy between text detection and recognition. Previous methods commonly regard manual annotations such as horizontal rectangles, rotated rectangles, quadrangles, and polygons as a prerequisite, which are much more expensive to obtain than single points. Our new framework, SPTS v2, allows us to train high-performing text-spotting models using only single-point annotations. SPTS v2 retains the advantages of the auto-regressive Transformer through an Instance Assignment Decoder (IAD) that sequentially predicts the center points of all text instances within a single sequence, while a Parallel Recognition Decoder (PRD) recognizes the text in parallel, significantly reducing the required sequence length. The two decoders share the same parameters and are interactively connected through a simple but effective information-transmission process that passes gradients and information between them. Comprehensive experiments on various existing benchmark datasets demonstrate that SPTS v2 outperforms previous state-of-the-art single-point text spotters with fewer parameters while achieving 19× faster inference speed. Within the context of our SPTS v2 framework, our experiments suggest a potential preference for the single-point representation in scene text spotting compared with other representations. Such an attempt opens up significant opportunities for scene text spotting applications beyond existing paradigms. Code is available at: https://github.com/Yuliang-Liu/SPTSv2.
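To make the single-point idea concrete, here is a minimal sketch of how a polygon annotation can be reduced to a center point and how a detection target sequence of discretized coordinate tokens might be assembled for an auto-regressive spotter. This follows the general SPTS-style recipe described in the abstract; all function names, the bin count, and the sequence layout are illustrative assumptions, not the official SPTSv2 API.

```python
def center_point(polygon):
    """Reduce a polygon annotation [(x, y), ...] to the mean of its vertices."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def quantize(value, size, nbins=1000):
    """Map a coordinate in [0, size] to an integer bin token in [0, nbins - 1]."""
    return min(nbins - 1, max(0, int(value / size * nbins)))

def build_detection_sequence(instances, img_w, img_h, nbins=1000):
    """Build the detection part of the target sequence: one (x, y) token
    pair per text instance, which the assignment decoder would predict
    sequentially. `instances` is a list of (polygon, transcription) pairs;
    transcriptions are handled by the parallel recognition branch and are
    ignored here."""
    seq = []
    for polygon, _text in instances:
        cx, cy = center_point(polygon)
        seq += [quantize(cx, img_w, nbins), quantize(cy, img_h, nbins)]
    return seq
```

For example, a single axis-aligned box from (0, 0) to (100, 50) in a 1000×500 image yields center (50.0, 25.0) and the token pair [50, 50]. Keeping only two tokens per instance in the sequential decoder, and moving recognition to a parallel branch, is what shortens the auto-regressive sequence relative to predicting full boxes or polygons plus transcriptions token by token.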

Results

Task | Dataset | Metric | Value | Model
Text Spotting | ICDAR 2015 | F-measure (%), Generic Lexicon | 72.6 | SPTS v2
Text Spotting | ICDAR 2015 | F-measure (%), Strong Lexicon | 82.3 | SPTS v2
Text Spotting | ICDAR 2015 | F-measure (%), Weak Lexicon | 77.7 | SPTS v2

Related Papers

AI Generated Text Detection Using Instruction Fine-tuned Large Language and Transformer-Based Models (2025-07-07)
PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning (2025-06-18)
Text-Aware Image Restoration with Diffusion Models (2025-06-11)
Task-driven real-world super-resolution of document scans (2025-06-08)
CL-ISR: A Contrastive Learning and Implicit Stance Reasoning Framework for Misleading Text Detection on Social Media (2025-06-05)
Stress-testing Machine Generated Text Detection: Shifting Language Models Writing Style to Fool Detectors (2025-05-30)
GoMatching++: Parameter- and Data-Efficient Arbitrary-Shaped Video Text Spotting and Benchmarking (2025-05-28)
The Devil is in Fine-tuning and Long-tailed Problems: A New Benchmark for Scene Text Detection (2025-05-21)