Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Weakly Supervised Object Localization via Transformer with Implicit Spatial Calibration

Haotian Bai, Ruimao Zhang, Jiong Wang, Xiang Wan

2022-07-21 · Long-range modeling · Object Localization · Weakly-Supervised Object Localization

Abstract

Weakly Supervised Object Localization (WSOL), which aims to localize objects using only image-level labels, has attracted much attention because of its low annotation cost in real applications. Recent studies leverage the long-range dependency of self-attention in visual Transformers to re-activate semantic regions, aiming to avoid the partial activation of traditional class activation mapping (CAM). However, long-range modeling in Transformers neglects the inherent spatial coherence of the object and often diffuses semantic-aware regions far from the object boundary, making localization results significantly larger or smaller than the object. To address this issue, we introduce a simple yet effective Spatial Calibration Module (SCM) for accurate WSOL, incorporating the semantic similarities of patch tokens and their spatial relationships into a unified diffusion model. Specifically, we introduce a learnable parameter to dynamically adjust the semantic correlations and spatial context intensities for effective information propagation. In practice, SCM is designed as an external module of the Transformer and can be removed during inference to reduce the computation cost. The object-sensitive localization ability is implicitly embedded into the Transformer encoder through optimization in the training phase. This enables the generated attention maps to capture sharper object boundaries and filter out object-irrelevant background areas. Extensive experimental results demonstrate the effectiveness of the proposed method, which significantly outperforms its counterpart TS-CAM on both the CUB-200 and ImageNet-1K benchmarks. The code is available at https://github.com/164140757/SCM.
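To make the diffusion idea concrete, the following is a minimal NumPy sketch of the kind of calibration the abstract describes: an activation map is propagated over a transition matrix that blends patch-token semantic similarity with spatial adjacency on the patch grid. All names (`spatial_calibration`, `alpha`, `steps`) are hypothetical, and `alpha` is fixed here for illustration, whereas the paper's mixing parameter is learnable; this is not the official SCM implementation.

```python
import numpy as np

def spatial_calibration(tokens, attn_map, grid_hw, alpha=0.5, steps=2):
    """Diffuse an activation map over a graph that mixes patch-token
    semantic similarity with spatial adjacency (illustrative sketch).

    tokens:   (N, D) patch embeddings, N = H * W
    attn_map: (N,) initial activation scores (e.g. CAM values)
    grid_hw:  (H, W) shape of the patch grid
    alpha:    weight blending semantic vs. spatial affinity
              (a learnable scalar in the paper; fixed here)
    steps:    number of propagation iterations
    """
    H, W = grid_hw
    N = H * W
    # Semantic affinity: non-negative cosine similarity between tokens.
    t = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    sem = np.clip(t @ t.T, 0.0, None)
    # Spatial affinity: 4-neighbour adjacency on the patch grid.
    spa = np.zeros((N, N))
    for i in range(N):
        r, c = divmod(i, W)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                spa[i, rr * W + cc] = 1.0
    # Blend the two affinities and row-normalise into a transition matrix.
    T = alpha * sem + (1.0 - alpha) * spa
    T = T / (T.sum(axis=1, keepdims=True) + 1e-8)
    # Propagate activations along the graph for a few steps.
    out = attn_map.copy()
    for _ in range(steps):
        out = T @ out
    return out
```

Because the spatial term anchors propagation to neighbouring patches while the semantic term spreads activation within the same object, the diffused map tends to cover the object more coherently than the raw attention alone, which is the intuition behind SCM's sharper boundaries.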

Results

Task                | Dataset      | Metric                              | Value | Model
Object Localization | ImageNet     | GT-known localization accuracy      | 68.8  | Deit-S
Object Localization | ImageNet     | Top-1 localization accuracy         | 56.1  | Deit-S
Object Localization | ImageNet     | Average top-1 classification accuracy | 76.7 | Deit-S
Object Localization | CUB-200-2011 | GT-known localization accuracy      | 96.6  | Deit-S
Object Localization | CUB-200-2011 | Top-1 localization accuracy         | 76.4  | Deit-S
Object Localization | CUB-200-2011 | Average top-1 classification accuracy | 78.5 | Deit-S

Related Papers

U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)
LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models (2025-07-14)
MambaFusion: Height-Fidelity Dense Global Fusion for Multi-modal 3D Object Detection (2025-07-06)
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval (2025-06-28)
VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding (2025-06-28)
RAG-6DPose: Retrieval-Augmented 6D Pose Estimation via Leveraging CAD as Knowledge Base (2025-06-23)
CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion (2025-06-17)
UAV Object Detection and Positioning in a Mining Industrial Metaverse with Custom Geo-Referenced Data (2025-06-16)