Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, Jian Yang

2020-06-08 · NeurIPS 2020 · Tasks: Dense Object Detection, General Classification, Object Detection
Paper · PDF · Code (official)

Abstract

One-stage detectors typically formulate object detection as dense classification and localization. The classification is usually optimized by Focal Loss, and the box location is commonly learned under a Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification, and localization. Two problems are discovered in existing practices: (1) the inconsistent usage of the quality estimation and classification between training and inference, and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty in complex scenes. To address these problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent an arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which are beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL), which generalizes Focal Loss from its discrete form to a continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0% AP using a ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5%) and ATSS (43.6%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model achieves a single-model single-scale AP of 48.2%, at 10 FPS on a single 2080Ti GPU. Code and models are available at https://github.com/implus/GFocal.
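To make the abstract's "discrete to continuous" generalization concrete, the paper's two instantiations of GFL can be sketched in plain Python: Quality Focal Loss (QFL) handles a joint classification-quality target y in [0, 1] (e.g. the predicted box's IoU with the ground truth), and Distribution Focal Loss (DFL) supervises a discretized distribution over box offsets. This is a minimal per-element sketch based on the paper's formulations, not the official implementation; the function names, the bin handling in `distribution_focal_loss`, and the normalization for non-unit bin spacing are this sketch's assumptions.

```python
import math


def quality_focal_loss(sigma, y, beta=2.0, eps=1e-12):
    """Quality Focal Loss for a single prediction (sketch).

    Generalizes binary Focal Loss to a continuous target y in [0, 1]:
        QFL(sigma) = -|y - sigma|^beta * ((1 - y) * log(1 - sigma) + y * log(sigma))
    With y restricted to {0, 1}, this reduces to the standard Focal Loss
    (beta playing the role of the focusing parameter gamma).
    """
    ce = (1.0 - y) * math.log(1.0 - sigma + eps) + y * math.log(sigma + eps)
    return -abs(y - sigma) ** beta * ce


def distribution_focal_loss(probs, bins, y, eps=1e-12):
    """Distribution Focal Loss for a single box-offset target (sketch).

    `probs` is a softmax distribution over the discrete bin values `bins`
    (ascending). The loss pushes probability mass onto the two bins that
    bracket the continuous target y:
        DFL = -((y_{i+1} - y) * log(S_i) + (y - y_i) * log(S_{i+1}))
    Here the bracket weights are normalized by the bin width, which matches
    the paper's form when the spacing is 1 (an assumption of this sketch).
    """
    # Index of the left bracket bin: largest i with bins[i] <= y.
    i = max(j for j in range(len(bins) - 1) if bins[j] <= y)
    left, right = bins[i], bins[i + 1]
    w_left = (right - y) / (right - left)
    w_right = (y - left) / (right - left)
    return -(w_left * math.log(probs[i] + eps)
             + w_right * math.log(probs[i + 1] + eps))
```

A perfect quality prediction (sigma == y) gives zero QFL because the modulating factor |y - sigma|^beta vanishes, and a distribution concentrated exactly on the target bin gives (near-)zero DFL; any mismatch yields a positive penalty.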

Results

Task | Dataset | Metric | Value | Model
Object Detection | COCO test-dev | AP50 | 67.4 | GFL (X-101-32x4d-DCN, single-scale)
Object Detection | COCO test-dev | AP75 | 52.6 | GFL (X-101-32x4d-DCN, single-scale)
Object Detection | COCO test-dev | APL | 60.2 | GFL (X-101-32x4d-DCN, single-scale)
Object Detection | COCO test-dev | APM | 51.7 | GFL (X-101-32x4d-DCN, single-scale)
Object Detection | COCO test-dev | APS | 29.2 | GFL (X-101-32x4d-DCN, single-scale)
Object Detection | COCO test-dev | box mAP | 48.2 | GFL (X-101-32x4d-DCN, single-scale)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)
Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations (2025-07-07)