Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


From Open Set to Closed Set: Counting Objects by Spatial Divide-and-Conquer

Haipeng Xiong, Hao Lu, Chengxin Liu, Liang Liu, Zhiguo Cao, Chunhua Shen

2019-08-15 · ICCV 2019 · Crowd Counting
Paper · PDF · Code (official)

Abstract

Visual counting, a task that predicts the number of objects from an image/video, is an open-set problem by nature, i.e., the count can vary in $[0,+\infty)$ in theory. However, the collected images and labeled count values are limited in reality, which means only a small closed set is observed. Existing methods typically model this task in a regression manner, but they are likely to suffer on an unseen scene with counts outside the scope of the closed set. In fact, counting is decomposable: a dense region can always be divided until sub-region counts fall within the previously observed closed set. Inspired by this idea, we propose a simple but effective approach, the Spatial Divide-and-Conquer Network (S-DCNet). S-DCNet only learns from a closed set but can generalize well to open-set scenarios via S-DC. S-DCNet is also efficient: to avoid repeatedly computing sub-region convolutional features, S-DC is executed on the feature map instead of on the input image. S-DCNet achieves state-of-the-art performance on three crowd counting datasets (ShanghaiTech, UCF_CC_50 and UCF-QNRF), a vehicle counting dataset (TRANCOS) and a plant counting dataset (MTC). Compared to the previous best methods, S-DCNet brings a 20.2% relative improvement on ShanghaiTech Part B, 20.9% on UCF-QNRF, 22.5% on TRANCOS and 15.1% on MTC. Code has been made available at: https://github.com/xhp-hust-2018-2011/S-DCNet.
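The core idea of the abstract can be illustrated with a minimal sketch: if a counter is only reliable for counts inside the observed closed set, recursively split any region whose predicted count exceeds that set into quadrants and sum the sub-counts. Note this is a simplified illustration, not the paper's implementation — the actual S-DCNet performs the division on convolutional feature maps with a learned division decider, and the names `predict_count` and `sdc_count` below are hypothetical. A ground-truth density map stands in for the counting network.

```python
import numpy as np

def predict_count(region):
    """Stand-in for a counting network: here it simply sums the density map.
    In practice the prediction is only trusted when the true count lies
    within the closed set seen during training."""
    return float(region.sum())

def sdc_count(density, closed_set_max=20.0, min_size=4):
    """Simplified spatial divide-and-conquer (S-DC):
    if the predicted count exceeds the closed-set bound, split the
    region into four quadrants and recurse; otherwise accept the count."""
    c = predict_count(density)
    h, w = density.shape
    if c <= closed_set_max or h <= min_size or w <= min_size:
        return c
    mh, mw = h // 2, w // 2
    return (sdc_count(density[:mh, :mw], closed_set_max, min_size)
            + sdc_count(density[:mh, mw:], closed_set_max, min_size)
            + sdc_count(density[mh:, :mw], closed_set_max, min_size)
            + sdc_count(density[mh:, mw:], closed_set_max, min_size))

# Toy example: 100 "objects" spread uniformly over a 32x32 map.
# The total (100) is outside the closed set, but every leaf sub-region
# count falls within it, so the recursion recovers the full count.
density = np.full((32, 32), 100.0 / (32 * 32))
print(round(sdc_count(density), 2))  # prints 100.0
```

The efficiency point in the abstract is that this recursion, applied naively to image pixels, would recompute convolutional features for every sub-region; S-DCNet instead divides the shared feature map once it has been computed.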

Results

Task            Dataset         Metric  Value  Model
Crowd Counting  ShanghaiTech B  MAE     6.7    S-DCNet
Crowd Counting  TRANCOS         MAE     2.92   S-DCNet
Crowd Counting  ShanghaiTech A  MAE     58.3   S-DCNet

Related Papers

Car Object Counting and Position Estimation via Extension of the CLIP-EBC Framework (2025-07-11)
EBC-ZIP: Improving Blockwise Crowd Counting with Zero-Inflated Poisson Regression (2025-06-24)
Point-to-Region Loss for Semi-Supervised Point-Based Crowd Counting (2025-05-28)
Crowd Scene Analysis using Deep Learning Techniques (2025-05-13)
Transformer-Based Dual-Optical Attention Fusion Crowd Head Point Counting and Localization Network (2025-05-11)
A Short Overview of Multi-Modal Wi-Fi Sensing (2025-05-10)
Adept: Annotation-Denoising Auxiliary Tasks with Discrete Cosine Transform Map and Keypoint for Human-Centric Pretraining (2025-04-29)
ProgRoCC: A Progressive Approach to Rough Crowd Counting (2025-04-18)