Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

HybridNets: End-to-End Perception Network

Dat Vu, Bao Ngo, Hung Phan

Published: 2022-03-17
Tasks: Drivable Area Detection, Traffic Object Detection, Autonomous Driving, Object Detection, Lane Detection
Links: Paper · PDF · Code (official)

Abstract

End-to-end networks have become increasingly important in multi-tasking. One prominent example is the growing significance of driving perception systems in autonomous driving. This paper systematically studies an end-to-end perception network for multi-tasking and proposes several key optimizations to improve accuracy. First, it proposes an efficient segmentation head and box/class prediction networks based on a weighted bidirectional feature network. Second, it proposes automatically customized anchors for each level of the weighted bidirectional feature network. Third, it proposes an efficient training loss function and training strategy to balance and optimize the network. Based on these optimizations, we develop an end-to-end perception network, called HybridNets, that performs traffic object detection, drivable area segmentation, and lane detection simultaneously and achieves better accuracy than prior art. In particular, HybridNets achieves 77.3 mean Average Precision on the Berkeley DeepDrive dataset and 31.6 mean Intersection over Union on lane detection, with 12.83 million parameters and 15.6 billion floating-point operations. In addition, it performs these visual perception tasks in real time and is thus a practical and accurate solution to the multi-tasking problem. Code is available at https://github.com/datvuthanh/HybridNets.
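The "weighted bidirectional feature network" the abstract refers to fuses feature maps from multiple levels using learnable, non-negative per-input weights ("fast normalized fusion", as introduced with BiFPN). The sketch below illustrates that weighting scheme on scalar feature values; the function name and values are illustrative assumptions, not taken from the HybridNets codebase.

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse feature values with learnable non-negative weights.

    Computes O = sum_i(w_i * I_i) / (eps + sum_j w_j), where each w_i is
    clamped to be non-negative (ReLU-style), so each normalized weight
    stays bounded in [0, 1]. Illustrative sketch, not the paper's code.
    """
    clamped = [max(w, 0.0) for w in weights]   # keep weights non-negative
    total = sum(clamped) + eps                 # normalizing denominator
    return sum(w * f for w, f in zip(clamped, features)) / total


# Example: fuse two feature values with weights in a roughly 2:1 ratio.
fused = fast_normalized_fusion([1.0, 4.0], [2.0, 1.0])
```

In a real network each `w_i` is a trainable parameter and each `I_i` a whole feature map; the clamping avoids the instability of unbounded weights while being cheaper than a softmax over them.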

Results

Task                     Dataset      Metric        Value  Model
Autonomous Vehicles      BDD100K val  Accuracy (%)  85.4   HybridNets
Autonomous Vehicles      BDD100K val  IoU (%)       31.6   HybridNets
Autonomous Vehicles      BDD100K val  Params (M)    12.8   HybridNets
Drivable Area Detection  BDD100K val  Params (M)    12.8   HybridNets
Drivable Area Detection  BDD100K val  mIoU          90.5   HybridNets
Lane Detection           BDD100K val  Accuracy (%)  85.4   HybridNets
Lane Detection           BDD100K val  IoU (%)       31.6   HybridNets
Lane Detection           BDD100K val  Params (M)    12.8   HybridNets
2D Object Detection      BDD100K val  Params (M)    12.8   HybridNets
2D Object Detection      BDD100K val  mIoU          90.5   HybridNets

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)