Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

TwinLiteNetPlus: A Stronger Model for Real-time Drivable Area and Lane Segmentation

Quang-Huy Che, Duc-Tri Le, Minh-Quan Pham, Vinh-Tiep Nguyen, Duc-Khai Lam

2024-03-25 | Drivable Area Detection | Segmentation | Autonomous Driving | Semantic Segmentation | Lane Detection

Paper | PDF | Code (official) | Code

Abstract

Semantic segmentation is crucial for autonomous driving, particularly for Drivable Area and Lane Segmentation, ensuring safety and navigation. To address the high computational costs of current state-of-the-art (SOTA) models, this paper introduces TwinLiteNetPlus (TwinLiteNet$^+$), a model adept at balancing efficiency and accuracy. TwinLiteNet$^+$ incorporates standard and depth-wise separable dilated convolutions, reducing complexity while maintaining high accuracy. It is available in four configurations, from the robust 1.94 million-parameter TwinLiteNet$^+_{\text{Large}}$ to the ultra-compact 34K-parameter TwinLiteNet$^+_{\text{Nano}}$. Notably, TwinLiteNet$^+_{\text{Large}}$ attains a 92.9\% mIoU for Drivable Area Segmentation and a 34.2\% IoU for Lane Segmentation. These results notably outperform those of current SOTA models while requiring a computational cost that is approximately 11 times lower in terms of Floating Point Operations (FLOPs) compared to the existing SOTA model. Extensively tested on various embedded devices, TwinLiteNet$^+$ demonstrates promising latency and power efficiency, underscoring its suitability for real-world autonomous vehicle applications.
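The abstract attributes the efficiency gains to replacing standard convolutions with depth-wise separable dilated convolutions. A back-of-envelope parameter count shows why this helps: a depth-wise separable layer factors a k×k convolution into a per-channel spatial filter plus a 1×1 point-wise mix, and dilation widens the receptive field at no parameter cost. The sketch below is illustrative only; the channel widths are hypothetical, not the actual TwinLiteNetPlus configuration.

```python
# Back-of-envelope parameter counts contrasting a standard convolution
# with a depth-wise separable one. Channel sizes are illustrative,
# not taken from the TwinLiteNetPlus architecture.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise k x k conv (one filter per input channel)
    followed by a 1x1 point-wise conv."""
    return k * k * c_in + c_in * c_out

def dilated_receptive_field(k: int, d: int) -> int:
    """Effective kernel extent of a k x k conv with dilation d:
    dilation grows the receptive field without adding weights."""
    return k + (k - 1) * (d - 1)

if __name__ == "__main__":
    c_in = c_out = 128  # hypothetical layer width
    std = standard_conv_params(c_in, c_out, 3)        # 147456
    sep = depthwise_separable_params(c_in, c_out, 3)  # 17536
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
    rf = dilated_receptive_field(3, 2)
    print(f"3x3 conv with dilation 2 covers a {rf}x{rf} window")
```

For a 128-channel 3×3 layer this already gives roughly an 8x weight reduction per layer, which is the kind of saving that accumulates into the order-of-magnitude FLOPs gap the abstract reports.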

Results

All results are on the BDD100K validation set.

Drivable Area Segmentation

Model                  | mIoU (%) | Params (M)
TwinLiteNetPlus-Large  | 92.9     | 1.94
TwinLiteNetPlus-Medium | 92.0     | 0.48
TwinLiteNetPlus-Small  | 90.6     | 0.12
TwinLiteNetPlus-Nano   | 87.3     | 0.03

Lane Segmentation

Model                  | Accuracy (%) | IoU (%) | Params (M)
TwinLiteNetPlus-Large  | 81.9         | 34.2    | 1.94
TwinLiteNetPlus-Medium | 79.1         | 32.3    | 0.48
TwinLiteNetPlus-Small  | 75.8         | 29.3    | 0.12
TwinLiteNetPlus-Nano   | 70.2         | 23.3    | 0.03
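The IoU and mIoU figures above are the standard segmentation metrics: per-class intersection over union, and its mean across classes. A minimal sketch for integer-labeled masks (illustrative only, not the authors' evaluation code):

```python
# Minimal IoU / mIoU sketch for flat lists of integer class labels.
# Illustrative only; real evaluation pipelines operate on full-resolution
# mask tensors and accumulate over the whole validation set.

def iou(pred, target, cls):
    """Intersection-over-union for one class id."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return inter / union if union else 0.0

def miou(pred, target, classes):
    """Mean IoU over the given class ids."""
    return sum(iou(pred, target, c) for c in classes) / len(classes)

if __name__ == "__main__":
    # 0 = background, 1 = drivable area (toy 6-pixel example)
    pred   = [0, 1, 1, 1, 0, 0]
    target = [0, 0, 1, 1, 1, 0]
    print(iou(pred, target, 1))           # 2 overlap / 4 union = 0.5
    print(miou(pred, target, [0, 1]))
```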

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)