Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Contrastive Learning for Lane Detection via cross-similarity

Ali Zoljodi, Sadegh Abadijou, Mina Alibeigi, Masoud Daneshtalab

2023-08-16 · Self-Supervised Learning · Contrastive Learning · Lane Detection

Abstract

Detecting lane markings in road scenes is challenging due to their intricate nature and susceptibility to unfavorable conditions. While lane markings have strong shape priors, their visibility is easily compromised by lighting conditions, occlusions by other vehicles or pedestrians, and fading of colors over time. The detection process is further complicated by the variety of lane shapes and natural variations, necessitating large amounts of data to train a robust lane detection model capable of handling diverse scenarios. In this paper, we present a novel self-supervised learning method termed Contrastive Learning for Lane Detection via cross-similarity (CLLD) to enhance the resilience of lane detection models in real-world scenarios, particularly when the visibility of lanes is compromised. CLLD introduces a contrastive learning (CL) method that assesses the similarity of local features within the global context of the input image, using surrounding information to predict lane markings. This is achieved by integrating local feature contrastive learning with our proposed cross-similarity operation. The local feature CL concentrates on extracting features from small patches, a necessity for accurately localizing lane segments. Meanwhile, cross-similarity captures global features, enabling the detection of obscured lane segments based on their surroundings. We strengthen cross-similarity by randomly masking portions of input images during augmentation. Extensive experiments on the TuSimple and CULane benchmarks demonstrate that CLLD outperforms state-of-the-art contrastive learning methods, particularly in visibility-impairing conditions such as shadows, while delivering comparable results under normal conditions. Compared to supervised learning, CLLD still excels in challenging scenarios such as shadows and crowded scenes, which are common in real-world driving.
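The abstract describes two ingredients that can be sketched independently of the paper's architecture: a random-masking augmentation applied to input images, and a contrastive (InfoNCE-style) loss computed over corresponding local patch features from two views. The snippet below is a minimal illustrative sketch of those two pieces in NumPy; it is not the authors' implementation, and the function names, patch size, masking ratio, and temperature are assumptions chosen for clarity.

```python
import numpy as np

def random_mask(image, patch=8, drop=0.3, rng=None):
    """Masking augmentation (assumed form): zero out random square
    patches of the input image with probability `drop` each."""
    rng = rng or np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if rng.random() < drop:
                out[y:y + patch, x:x + patch] = 0.0
    return out

def patch_infonce(feats_a, feats_b, tau=0.1):
    """InfoNCE over local patch features from two augmented views.
    feats_a, feats_b: (N, D) arrays where row i of each view is the
    feature of the same spatial patch, i.e. a positive pair."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                            # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # positives on diagonal
```

In this sketch, each patch's feature in one view is pulled toward the feature of the same patch in the masked view and pushed away from all other patches; the masking forces the model to infer an occluded patch from its surroundings, which mirrors the motivation given in the abstract.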

Results

| Task                | Dataset  | Metric   | Value | Model         |
|---------------------|----------|----------|-------|---------------|
| Autonomous Vehicles | CULane   | F1 score | 79.27 | CLRNet - CLLD |
| Autonomous Vehicles | CULane   | F1 score | 76.26 | RESA - CLLD   |
| Autonomous Vehicles | CULane   | F1 score | 70.56 | UNet - CLLD   |
| Autonomous Vehicles | TuSimple | Accuracy | 96.82 | CLLD          |
| Lane Detection      | CULane   | F1 score | 79.27 | CLRNet - CLLD |
| Lane Detection      | CULane   | F1 score | 76.26 | RESA - CLLD   |
| Lane Detection      | CULane   | F1 score | 70.56 | UNet - CLLD   |
| Lane Detection      | TuSimple | Accuracy | 96.82 | CLLD          |

Related Papers

A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)