Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and Lane Segmentation in Self-Driving Cars

Quang Huy Che, Dinh Phuc Nguyen, Minh Quan Pham, Duc Khai Lam

2023-07-20 · Autonomous Vehicles · Drivable Area Detection · Segmentation · Autonomous Driving · Semantic Segmentation · Lane Detection · Self-Driving Cars

Paper · PDF · Code (official)

Abstract

Semantic segmentation is a common task in autonomous driving for understanding the surrounding environment. Driveable area segmentation and lane detection are particularly important for safe and efficient navigation on the road. However, standard semantic segmentation models are computationally expensive and require high-end hardware, which is not feasible for embedded systems in autonomous vehicles. This paper proposes a lightweight model for driveable area and lane line segmentation. TwinLiteNet is designed to be computationally inexpensive while still achieving accurate and efficient segmentation results. We evaluate TwinLiteNet on the BDD100K dataset and compare it with modern models. Experimental results show that TwinLiteNet performs similarly to existing approaches while requiring significantly fewer computational resources: it achieves a 91.3% mIoU score on the Drivable Area task and 31.08% IoU on the Lane Detection task with only 0.4 million parameters, and reaches 415 FPS on an RTX A5000 GPU. Furthermore, TwinLiteNet runs in real time on embedded devices with limited computing power, reaching 60 FPS on a Jetson Xavier NX, making it an ideal solution for self-driving vehicles. Code is available at https://github.com/chequanghuy/TwinLiteNet.
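The mIoU and IoU figures quoted above follow the standard intersection-over-union definition for segmentation: per class, the overlap between predicted and ground-truth masks divided by their union, averaged over classes for mIoU. A minimal sketch of that metric (generic, not the paper's evaluation code; the toy `pred`/`target` arrays are illustrative only):

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean Intersection-over-Union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union > 0:  # skip classes absent from both masks
            inter = np.logical_and(p, t).sum()
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes (background = 0, drivable area = 1).
pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(miou(pred, target, num_classes=2))  # → 0.775 (mean of 3/4 and 4/5)
```

Benchmark suites report this per task, which is why the table below lists separate IoU/mIoU values for the drivable-area and lane masks.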

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Autonomous Vehicles | BDD100K val | Accuracy (%) | 77.8 | TwinLiteNet |
| Autonomous Vehicles | BDD100K val | IoU (%) | 31.08 | TwinLiteNet |
| Autonomous Vehicles | BDD100K val | Params (M) | 0.43 | TwinLiteNet |
| Drivable Area Detection | BDD100K val | Params (M) | 0.43 | TwinLiteNet |
| Drivable Area Detection | BDD100K val | mIoU | 91.3 | TwinLiteNet |
| Lane Detection | BDD100K val | Accuracy (%) | 77.8 | TwinLiteNet |
| Lane Detection | BDD100K val | IoU (%) | 31.08 | TwinLiteNet |
| Lane Detection | BDD100K val | Params (M) | 0.43 | TwinLiteNet |
| 2D Object Detection | BDD100K val | Params (M) | 0.43 | TwinLiteNet |
| 2D Object Detection | BDD100K val | mIoU | 91.3 | TwinLiteNet |
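Throughput figures such as 415 FPS (RTX A5000) or 60 FPS (Jetson Xavier NX) are conventionally obtained by timing repeated forward passes after a warm-up phase and dividing iterations by elapsed time. A hedged, hardware-agnostic sketch (the `infer` callable is a hypothetical stand-in; in practice it would wrap the model's forward pass, with device synchronization added for GPU timing):

```python
import time

def measure_fps(infer, n_warmup=10, n_iters=100):
    """Estimate frames per second for a single-image inference callable."""
    for _ in range(n_warmup):  # warm-up runs exclude one-time setup cost
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Stand-in workload for a real forward pass (illustrative only).
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
print(f"{fps:.1f} FPS")
```

Reported FPS is highly sensitive to batch size, input resolution, and precision (FP16 vs FP32), so such numbers are only comparable under matched settings.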

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
- AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
- Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
- Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)