Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Single Shot Self-Reliant Scene Text Spotter by Decoupled yet Collaborative Detection and Recognition

Jingjing Wu, Pengyuan Lyu, Guangming Lu, Chengquan Zhang, Wenjie Pei

2022-07-15 · Text Spotting · Text Detection
Paper · PDF · Code (official)

Abstract

Typical text spotters follow a two-stage spotting paradigm: they first detect the boundary of each text instance and then perform text recognition within the detected regions. Despite the remarkable progress of this paradigm, an important limitation is that the performance of text recognition depends heavily on the precision of text detection, so errors can propagate from detection to recognition. In this work, we propose the single-shot Self-Reliant Scene Text Spotter v2 (SRSTS v2), which circumvents this limitation by decoupling recognition from detection while optimizing the two tasks collaboratively. Specifically, SRSTS v2 samples representative feature points around each potential text instance and conducts text detection and recognition in parallel, guided by these sampled points. Text recognition is therefore no longer dependent on detection, which alleviates error propagation from detection to recognition. Moreover, the sampling module is learned under supervision from both detection and recognition, allowing collaborative optimization and mutual enhancement between the two tasks. Benefiting from this sampling-driven concurrent spotting framework, our approach can recognize text instances correctly even when precise text boundaries are challenging to detect. Extensive experiments on four benchmarks demonstrate that our method compares favorably to state-of-the-art spotters.
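
To make the sampling-driven, parallel design described in the abstract concrete, the following PyTorch code is a minimal illustrative sketch. It is not the authors' SRSTS v2 implementation: the module names, channel width, number of sampled points, and character-vocabulary size are hypothetical placeholders, and the per-instance anchoring of sampled points is simplified to a dense grid.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SamplingDrivenSpotterSketch(nn.Module):
    """Illustrative sketch of a sampling-driven spotter.

    NOT the authors' SRSTS v2 code: all sizes and names below are
    hypothetical assumptions made for this example.
    """

    def __init__(self, feat_channels=256, num_points=16, vocab_size=97):
        super().__init__()
        # Shared sampling module: predicts (x, y) coordinates for a fixed
        # number of representative points at every feature-map location.
        self.point_head = nn.Conv2d(feat_channels, num_points * 2, 1)
        # Detection head, e.g. a text-center heatmap.
        self.det_head = nn.Conv2d(feat_channels, 1, 1)
        # Recognition head: classifies each sampled point feature.
        self.rec_head = nn.Linear(feat_channels, vocab_size)
        self.num_points = num_points

    def forward(self, feats):
        # feats: (B, C, H, W) backbone feature map.
        b, c, h, w = feats.shape

        # Predict sampling coordinates in [-1, 1] (grid_sample convention).
        # The real method anchors points around each potential instance;
        # here they are predicted densely purely for illustration.
        points = torch.tanh(self.point_head(feats))            # (B, 2P, H, W)
        points = points.view(b, self.num_points, 2, h, w)

        # Detection branch runs directly on the shared features. Note that
        # the recognition branch below never reads det_map, so the two
        # tasks are decoupled at inference time.
        det_map = torch.sigmoid(self.det_head(feats))          # (B, 1, H, W)

        # Recognition branch: bilinearly sample point features, then decode.
        grid = points.permute(0, 1, 3, 4, 2).reshape(b, self.num_points * h, w, 2)
        point_feats = F.grid_sample(feats, grid, align_corners=False)
        point_feats = point_feats.view(b, c, self.num_points, h, w)
        rec_logits = self.rec_head(point_feats.permute(0, 2, 3, 4, 1))

        # Both heads depend on the shared features and sampled points, so
        # detection and recognition losses jointly supervise the sampler.
        return det_map, rec_logits

The property the sketch highlights is the one the abstract claims: rec_logits is computed without consuming det_map, so recognition neither waits on detection nor inherits its localization errors, while gradients from both branches flow into point_head, giving the collaborative supervision of the sampling module.
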

Results

Task          | Dataset    | Metric                          | Value | Model
Text Spotting | ICDAR 2015 | F-measure (%) - Generic Lexicon | 74.5  | SRSTS
Text Spotting | ICDAR 2015 | F-measure (%) - Strong Lexicon  | 85.6  | SRSTS
Text Spotting | ICDAR 2015 | F-measure (%) - Weak Lexicon    | 81.7  | SRSTS

Related Papers

AI Generated Text Detection Using Instruction Fine-tuned Large Language and Transformer-Based Models · 2025-07-07
PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning · 2025-06-18
Text-Aware Image Restoration with Diffusion Models · 2025-06-11
Task-driven real-world super-resolution of document scans · 2025-06-08
CL-ISR: A Contrastive Learning and Implicit Stance Reasoning Framework for Misleading Text Detection on Social Media · 2025-06-05
Stress-testing Machine Generated Text Detection: Shifting Language Models Writing Style to Fool Detectors · 2025-05-30
GoMatching++: Parameter- and Data-Efficient Arbitrary-Shaped Video Text Spotting and Benchmarking · 2025-05-28
The Devil is in Fine-tuning and Long-tailed Problems: A New Benchmark for Scene Text Detection · 2025-05-21