Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-Branch Auxiliary Fusion YOLO with Re-parameterization Heterogeneous Convolutional for accurate object detection

Zhiqiang Yang, Qiu Guan, Keer Zhao, Jianmin Yang, Xinli Xu, Haixia Long, Ying Tang

2024-07-05 · Real-Time Object Detection · Novel Object Detection · Object Detection
Paper · PDF · Code (official)

Abstract

Due to the effective performance of multi-scale feature fusion, Path Aggregation FPN (PAFPN) is widely employed in YOLO detectors. However, it cannot efficiently and adaptively integrate high-level semantic information with low-level spatial information. In this paper we propose MAF-YOLO, a novel object detection framework with a versatile neck named Multi-Branch Auxiliary FPN (MAFPN). Within MAFPN, the Superficial Assisted Fusion (SAF) module combines the output of the backbone with the neck, preserving an optimal level of shallow information to facilitate subsequent learning. Meanwhile, the Advanced Assisted Fusion (AAF) module, deeply embedded within the neck, conveys a more diverse range of gradient information to the output layer. Furthermore, our proposed Re-parameterized Heterogeneous Efficient Layer Aggregation Network (RepHELAN) module ensures that both the overall model architecture and the convolutional design embrace heterogeneous large convolution kernels. This guarantees the preservation of information related to small targets while simultaneously achieving a multi-scale receptive field. Taking the nano version of MAF-YOLO as an example, it achieves 42.4% AP on COCO with only 3.76M learnable parameters and 10.51G FLOPs, outperforming YOLOv8n by about 5.1%. The source code of this work is available at: https://github.com/yang-0201/MAF-YOLO.
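RepHELAN builds on structural re-parameterization: parallel convolution branches with heterogeneous kernel sizes used during training can be folded into a single convolution for inference, because convolution is linear and a small kernel zero-padded to the large kernel's size produces identical outputs. A minimal single-channel NumPy sketch of this merging identity (the function names `conv2d` and `merge_kernels` are illustrative, not from the paper's code; multi-channel handling and batch-norm fusion are omitted):

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded single-channel cross-correlation, stride 1."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def merge_kernels(kernels):
    """Fold parallel heterogeneous branches into one kernel:
    zero-pad each smaller kernel to the largest size, then sum."""
    kmax = max(k.shape[0] for k in kernels)
    merged = np.zeros((kmax, kmax))
    for k in kernels:
        p = (kmax - k.shape[0]) // 2
        merged[p:p + k.shape[0], p:p + k.shape[1]] += k
    return merged

# Training-time branches: a 3x3 and a 1x1 kernel in parallel.
x = np.random.rand(8, 8)
k3 = np.random.rand(3, 3)
k1 = np.random.rand(1, 1)

# Sum of the branch outputs equals one conv with the merged kernel.
branch_sum = conv2d(x, k3) + conv2d(x, k1)
merged_out = conv2d(x, merge_kernels([k3, k1]))
print(np.allclose(branch_sum, merged_out))  # True
```

The same identity underlies RepVGG-style blocks generally; the heterogeneous-kernel aspect of RepHELAN simply means the parallel branches use different (including large) kernel sizes before being folded.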

Results

Task                  Dataset                           Metric  Value  Model
Object Detection      COCO (Common Objects in Context)  box AP  51.2   MAFYOLOm
Object Detection      COCO (Common Objects in Context)  box AP  47.4   MAFYOLOs
Object Detection      COCO (Common Objects in Context)  box AP  42.4   MAFYOLOn
2D Object Detection   COCO (Common Objects in Context)  box AP  51.2   MAFYOLOm
2D Object Detection   COCO (Common Objects in Context)  box AP  47.4   MAFYOLOs
2D Object Detection   COCO (Common Objects in Context)  box AP  42.4   MAFYOLOn

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)
Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations (2025-07-07)