Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LATR: 3D Lane Detection from Monocular Images with Transformer

Yueru Luo, Chaoda Zheng, Xu Yan, Tang Kun, Chao Zheng, Shuguang Cui, Zhen Li

2023-08-08 · ICCV 2023 · 3D Lane Detection · Autonomous Driving · Lane Detection

Paper · PDF · Code (official)

Abstract

3D lane detection from monocular images is a fundamental yet challenging task in autonomous driving. Recent advances primarily rely on structural 3D surrogates (e.g., bird's eye view) built from front-view image features and camera parameters. However, the depth ambiguity in monocular images inevitably causes misalignment between the constructed surrogate feature map and the original image, posing a great challenge for accurate lane detection. To address this issue, we present LATR, a novel end-to-end 3D lane detector that uses 3D-aware front-view features without a transformed view representation. Specifically, LATR detects 3D lanes via cross-attention based on query and key-value pairs, constructed using our lane-aware query generator and dynamic 3D ground positional embedding. On the one hand, each query is generated from 2D lane-aware features and adopts a hybrid embedding to enhance lane information. On the other hand, 3D spatial information is injected as a positional embedding derived from an iteratively updated 3D ground plane. LATR outperforms previous state-of-the-art methods by large margins on the synthetic Apollo benchmark and on the realistic OpenLane and ONCE-3DLanes benchmarks (e.g., an 11.4-point gain in F1 score on OpenLane). Code will be released at https://github.com/JMoonr/LATR.
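The core idea in the abstract — lane queries attending over front-view image features whose keys carry a positional embedding from a 3D ground plane — can be sketched as plain scaled dot-product cross-attention. This is a minimal illustrative sketch, not the authors' implementation: the shapes, the single attention head, and the function names are assumptions; the actual LATR model uses learned query generators, multi-head attention, and an iteratively refined ground plane.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lane_cross_attention(queries, fv_feats, ground_pos_emb):
    """Single-head cross-attention sketch (hypothetical, simplified).

    queries        : (N, D) lane queries
    fv_feats       : (HW, D) flattened front-view image features (keys/values)
    ground_pos_emb : (HW, D) positional embedding from the 3D ground plane,
                     injected into the keys so attention is 3D-aware
    """
    keys = fv_feats + ground_pos_emb
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    weights = softmax(scores)            # (N, HW), rows sum to 1
    return weights @ fv_feats            # (N, D) updated lane queries

# Toy usage: 20 lane queries over a 256-token front-view feature map.
rng = np.random.default_rng(0)
q = rng.standard_normal((20, 64))
feats = rng.standard_normal((256, 64))
pos = rng.standard_normal((256, 64))
out = lane_cross_attention(q, feats, pos)
print(out.shape)  # (20, 64)
```

In the paper's framing, the key point this sketch captures is that the 3D information enters only through the positional embedding added to the keys, so no bird's-eye-view surrogate feature map is ever constructed.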

Results

Task                | Dataset  | Metric          | Value | Model
Autonomous Vehicles | OpenLane | Curve           | 68.2  | LATR
Autonomous Vehicles | OpenLane | Extreme Weather | 57.1  | LATR
Autonomous Vehicles | OpenLane | F1 (all)        | 61.9  | LATR
Autonomous Vehicles | OpenLane | Intersection    | 52.3  | LATR
Autonomous Vehicles | OpenLane | Merge & Split   | 61.5  | LATR
Autonomous Vehicles | OpenLane | Night           | 55.4  | LATR
Autonomous Vehicles | OpenLane | Up & Down       | 55.2  | LATR
Lane Detection      | OpenLane | Curve           | 68.2  | LATR
Lane Detection      | OpenLane | Extreme Weather | 57.1  | LATR
Lane Detection      | OpenLane | F1 (all)        | 61.9  | LATR
Lane Detection      | OpenLane | Intersection    | 52.3  | LATR
Lane Detection      | OpenLane | Merge & Split   | 61.5  | LATR
Lane Detection      | OpenLane | Night           | 55.4  | LATR
Lane Detection      | OpenLane | Up & Down       | 55.2  | LATR

Related Papers

- GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
- AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
- World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
- Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
- Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
- LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
- Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
- Towards Autonomous Riding: A Review of Perception, Planning, and Control in Intelligent Two-Wheelers (2025-07-16)