Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Rethinking Spatial Invariance of Convolutional Networks for Object Counting

Zhi-Qi Cheng, Qi Dai, Hong Li, Jingkuan Song, Xiao Wu, Alexander G. Hauptmann

2022-06-10 · CVPR 2022 · Crowd Counting · Object Counting

Paper · PDF · Code (official)

Abstract

Previous work generally holds that improving the spatial invariance of convolutional networks is the key to object counting. However, after examining several mainstream counting networks, we were surprised to find that overly strict pixel-level spatial invariance causes the networks to overfit noise during density map generation. In this paper, we replace the original convolution filters with locally connected Gaussian kernels to estimate spatial positions in the density map. The purpose is to allow the feature extraction process to stimulate the density map generation process to overcome annotation noise. Inspired by previous work, we propose a low-rank approximation accompanied by translation invariance to favorably implement the approximation of massive Gaussian convolutions. Our work points to a new direction for follow-up research: how to properly relax the overly strict pixel-level spatial invariance for object counting. We evaluate our methods on four mainstream object counting networks (i.e., MCNN, CSRNet, SANet, and ResNet-50). Extensive experiments were conducted on seven popular benchmarks covering three applications (i.e., crowd, vehicle, and plant counting). Experimental results show that our methods significantly outperform other state-of-the-art methods and achieve promising learning of the spatial positions of objects.
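The core idea above is to place a Gaussian kernel at each annotated point, with a bandwidth that may vary per location rather than being fixed by one shared convolution filter. The sketch below is a minimal illustration of that locally connected idea for building a ground-truth density map, not the paper's actual GauNet implementation (function names and the fixed kernel size are my own assumptions):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian kernel of the given size and bandwidth, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def density_map(shape, points, sigmas, size=15):
    """Place one Gaussian per annotated point. Unlike a shared convolution
    filter, sigma may differ at each location ("locally connected")."""
    h, w = shape
    dmap = np.zeros((h, w), dtype=np.float64)
    half = size // 2
    for (y, x), sigma in zip(points, sigmas):
        k = gaussian_kernel(size, sigma)
        # clip the kernel window at the image borders
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        ky0, ky1 = y0 - (y - half), size - ((y + half + 1) - y1)
        kx0, kx1 = x0 - (x - half), size - ((x + half + 1) - x1)
        dmap[y0:y1, x0:x1] += k[ky0:ky1, kx0:kx1]
    return dmap

# Two objects with different local bandwidths; the map integrates to the count.
dm = density_map((64, 64), [(20, 20), (40, 45)], sigmas=[2.0, 4.0])
```

Because each kernel is normalized, the density map sums to the object count, which is why relaxing per-pixel invariance in kernel shape need not change the predicted count itself.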

Results

Task | Dataset | Metric | Value | Model
---|---|---|---|---
Crowd Counting | ShanghaiTech B | MAE | 6 | GauNet (ResNet-50)
Crowd Counting | UCF-QNRF | MAE | 81.6 | GauNet (ResNet-50)
Crowd Counting | ShanghaiTech A | MAE | 54.8 | GauNet (ResNet-50)
Crowd Counting | ShanghaiTech A | MSE | 89.1 | GauNet (ResNet-50)
Crowd Counting | JHU-CROWD++ | MAE | 58.2 | GauNet (ResNet-50)
Crowd Counting | JHU-CROWD++ | MSE | 245.1 | GauNet (ResNet-50)
Crowd Counting | UCF CC 50 | MAE | 186.3 | GauNet (ResNet-50)
Object Counting | TRANCOS | MAE | 2.1 | GauNet (ResNet-50)
Object Counting | TRANCOS | MSE | 2.6 | GauNet (ResNet-50)
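The MAE and MSE values above are image-level counting errors: per-image predicted counts versus ground-truth counts. Note that in the crowd-counting literature "MSE" conventionally denotes the root of the mean squared error. A minimal sketch of both metrics (the function name and sample counts are illustrative, not from the paper):

```python
import math

def count_errors(pred_counts, gt_counts):
    """MAE and MSE as reported on crowd-counting benchmarks.
    'MSE' here follows the field's convention: root mean squared error."""
    n = len(pred_counts)
    mae = sum(abs(p - g) for p, g in zip(pred_counts, gt_counts)) / n
    mse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred_counts, gt_counts)) / n)
    return mae, mse

# Hypothetical counts on three test images.
mae, mse = count_errors([105.0, 98.0, 230.0], [100.0, 100.0, 220.0])
```

MSE penalizes large per-image errors more heavily than MAE, which is why the two metrics can rank models differently on datasets with a few very crowded scenes.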

Related Papers

Car Object Counting and Position Estimation via Extension of the CLIP-EBC Framework (2025-07-11)
EBC-ZIP: Improving Blockwise Crowd Counting with Zero-Inflated Poisson Regression (2025-06-24)
OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models (2025-06-03)
Point-to-Region Loss for Semi-Supervised Point-Based Crowd Counting (2025-05-28)
Improving Contrastive Learning for Referring Expression Counting (2025-05-28)
InstructSAM: A Training-Free Framework for Instruction-Oriented Remote Sensing Object Recognition (2025-05-21)
Expanding Zero-Shot Object Counting with Rich Prompts (2025-05-21)
VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning (2025-05-17)