Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer

Maoyuan Ye, Jing Zhang, Shanshan Zhao, Juhua Liu, Bo Du, Dacheng Tao

2022-07-10 · Scene Text Detection · Form · Text Detection
Paper · PDF · Code (official)

Abstract

Recently, Transformer-based methods, which predict polygon points or Bezier curve control points to localize texts, have become popular in scene text detection. However, these methods, built upon the detection transformer framework, may achieve sub-optimal training efficiency and performance due to coarse positional query modeling. In addition, the point label form exploited in previous works implies the human reading order, which, from our observation, impedes detection robustness. To address these challenges, this paper proposes a concise Dynamic Point Text DEtection TRansformer network, termed DPText-DETR. In detail, DPText-DETR directly leverages explicit point coordinates to generate position queries and dynamically updates them in a progressive way. Moreover, to improve the spatial inductive bias of non-local self-attention in the Transformer, we present an Enhanced Factorized Self-Attention module which provides point queries within each instance with circular shape guidance. Furthermore, we design a simple yet effective positional label form to tackle the side effect of the previous form. To further evaluate the impact of different label forms on detection robustness in real-world scenarios, we establish an Inverse-Text test set containing 500 manually labeled images. Extensive experiments demonstrate the high training efficiency, robustness, and state-of-the-art performance of our method on popular benchmarks. The code and the Inverse-Text test set are available at https://github.com/ymy-k/DPText-DETR.
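The abstract's core idea of generating position queries directly from explicit point coordinates can be illustrated with a DETR-style sinusoidal embedding of (x, y) points. The sketch below is a minimal NumPy illustration of that general mechanism; the function name, dimensions, and embedding details are assumptions for exposition, not the paper's exact implementation (see the official repository for that).

```python
import numpy as np

def point_pos_queries(points, dim=256, temperature=10000.0):
    """Illustrative sketch: map explicit point coordinates in [0, 1]
    to sinusoidal positional embeddings, one query vector per point.
    Hypothetical helper, not DPText-DETR's actual code."""
    points = np.asarray(points, dtype=np.float64)      # (N, 2)
    half = dim // 2                                    # channels per coordinate
    i = np.arange(half, dtype=np.float64)
    freqs = temperature ** (2.0 * (i // 2) / half)     # (half,)
    ang = points[..., None] * 2.0 * np.pi / freqs      # (N, 2, half)
    emb = np.empty_like(ang)
    emb[..., 0::2] = np.sin(ang[..., 0::2])            # sin on even channels
    emb[..., 1::2] = np.cos(ang[..., 1::2])            # cos on odd channels
    return emb.reshape(points.shape[0], dim)           # (N, dim)

# Each polygon point yields one position query; in the paper these
# queries are then refined progressively across decoder layers.
queries = point_pos_queries(np.random.rand(16, 2))     # 16 points -> (16, 256)
```

In the paper, such point-derived queries are updated dynamically layer by layer, rather than being fixed learned embeddings as in earlier detection transformers.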

Results

Task                 | Dataset      | Metric    | Value | Model
---------------------|--------------|-----------|-------|-------------------------
Scene Text Detection | Total-Text   | FPS       | 17    | DPText-DETR (ResNet-50)
Scene Text Detection | SCUT-CTW1500 | F-Measure | 88.8  | DPText-DETR (ResNet-50)
Scene Text Detection | SCUT-CTW1500 | Precision | 91.7  | DPText-DETR (ResNet-50)
Scene Text Detection | SCUT-CTW1500 | Recall    | 86.2  | DPText-DETR (ResNet-50)
Scene Text Detection | IC19-Art     | H-Mean    | 78.1  | DPText-DETR (ResNet-50)

Related Papers

FreeAudio: Training-Free Timing Planning for Controllable Long-Form Text-to-Audio Generation (2025-07-11)
AI Generated Text Detection Using Instruction Fine-tuned Large Language and Transformer-Based Models (2025-07-07)
Controlled Retrieval-augmented Context Evaluation for Long-form RAG (2025-06-24)
PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning (2025-06-18)
FormGym: Doing Paperwork with Agents (2025-06-17)
FreeQ-Graph: Free-form Querying with Semantic Consistent Scene Graph for 3D Scene Understanding (2025-06-16)
Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks (2025-06-16)
ARGUS: Hallucination and Omission Evaluation in Video-LLMs (2025-06-09)