Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Line Segment Detection Using Transformers without Edges

Yifan Xu, Weijian Xu, David Cheung, Zhuowen Tu

2021-01-06 · CVPR 2021 · Tasks: Line Segment Detection, Multi-Task Learning

Links: Paper · PDF · Code (official)

Abstract

In this paper, we present a joint end-to-end line segment detection algorithm using Transformers that is free of post-processing and of heuristics-guided intermediate processing (edge/junction/region detection). Our method, named LinE segment TRansformers (LETR), takes advantage of integrating tokenized queries, a self-attention mechanism, and an encoder-decoder strategy within Transformers, skipping the standard heuristic designs for edge element detection and perceptual grouping. We equip Transformers with a multi-scale encoder/decoder strategy to perform fine-grained line segment detection under a direct endpoint distance loss. This loss term is particularly suitable for detecting geometric structures such as line segments, which are not conveniently represented by standard bounding boxes. The Transformers learn to gradually refine line segments through layers of self-attention. In our experiments, we show state-of-the-art results on the Wireframe and YorkUrban benchmarks.
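The "direct endpoint distance loss" mentioned above can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation: it assumes predicted and ground-truth segments have already been matched into pairs, represents each segment by its two endpoints, and, since a segment is an unordered pair of endpoints, takes the cheaper of the two possible endpoint orderings before averaging an L1 cost. The function name and array layout are assumptions for the example.

```python
import numpy as np

def endpoint_distance_loss(pred, gt):
    """Illustrative endpoint-distance loss for matched segment pairs.

    pred, gt: array-likes of shape (N, 4) holding (x1, y1, x2, y2).
    A line segment is an unordered pair of endpoints, so for each pair
    we take the smaller L1 cost of the two endpoint orderings.
    """
    pred = np.asarray(pred, dtype=float).reshape(-1, 2, 2)  # (N, endpoint, xy)
    gt = np.asarray(gt, dtype=float).reshape(-1, 2, 2)

    direct = np.abs(pred - gt).sum(axis=(1, 2))             # p1->g1, p2->g2
    swapped = np.abs(pred - gt[:, ::-1]).sum(axis=(1, 2))   # p1->g2, p2->g1
    return float(np.minimum(direct, swapped).mean())
```

For example, a prediction (0,0)-(1,1) against a ground truth stored as (1,1)-(0,0) incurs zero loss, because the swapped ordering matches exactly; the actual LETR loss operates on the Transformer's query outputs after bipartite matching.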

Results

Task                    Dataset              Metric  Value  Model
Transfer Learning       Wireframe dataset    FH      83.3   LETR
Transfer Learning       Wireframe dataset    sAP10   65.2   LETR
Transfer Learning       Wireframe dataset    sAP15   67.7   LETR
Line Segment Detection  York Urban Dataset   FH      66.9   LETR
Line Segment Detection  York Urban Dataset   sAP10   29.4   LETR
Line Segment Detection  York Urban Dataset   sAP15   31.7   LETR
Multi-Task Learning     Wireframe dataset    FH      83.3   LETR
Multi-Task Learning     Wireframe dataset    sAP10   65.2   LETR
Multi-Task Learning     Wireframe dataset    sAP15   67.7   LETR
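The sAP10 and sAP15 rows refer to structural average precision at squared-endpoint-distance thresholds of 10 and 15. The matching step behind such a metric can be sketched as below; this is an assumed, simplified version (the official evaluation additionally rescales segments to a fixed resolution and sweeps confidence thresholds to compute AP). Predictions are taken in descending confidence order, and each is a true positive if its minimum squared endpoint distance to a still-unmatched ground-truth segment falls under the threshold.

```python
import numpy as np

def match_segments(pred, gt, threshold):
    """Greedy TP/FP marking for an sAP-style evaluation (illustrative).

    pred: (P, 4) predicted segments, sorted by descending confidence.
    gt:   (G, 4) ground-truth segments.
    A prediction is a true positive if the smaller (over both endpoint
    orderings) sum of squared endpoint distances to an unmatched
    ground-truth segment is below `threshold` (e.g. 10 for sAP10).
    """
    pred = np.asarray(pred, dtype=float).reshape(-1, 2, 2)
    gt = np.asarray(gt, dtype=float).reshape(-1, 2, 2)
    matched = np.zeros(len(gt), dtype=bool)   # each GT segment matches once
    tp = np.zeros(len(pred), dtype=bool)
    for i, p in enumerate(pred):
        if not len(gt):
            break
        direct = ((p - gt) ** 2).sum(axis=(1, 2))
        swapped = ((p - gt[:, ::-1]) ** 2).sum(axis=(1, 2))
        cost = np.minimum(direct, swapped)
        cost[matched] = np.inf                # already-claimed GT is unavailable
        j = int(np.argmin(cost))
        if cost[j] < threshold:
            matched[j] = True
            tp[i] = True
    return tp
```

Because each ground-truth segment can be claimed only once, a duplicate detection of the same segment counts as a false positive, which is what penalizes over-detection in the reported sAP numbers.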

Related Papers

- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Robust-Multi-Task Gradient Boosting (2025-07-15)
- SAMO: A Lightweight Sharpness-Aware Approach for Multi-Task Optimization with Joint Global-Local Perturbation (2025-07-10)
- Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration (2025-06-25)
- AnchorDP3: 3D Affordance Guided Sparse Diffusion Policy for Robotic Manipulation (2025-06-24)
- An Audio-centric Multi-task Learning Framework for Streaming Ads Targeting on Spotify (2025-06-23)
- SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning (2025-06-18)
- Leader360V: The Large-scale, Real-world 360 Video Dataset for Multi-task Learning in Diverse Environment (2025-06-17)