Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization

Fan Zhang, Meng Li, Guisheng Zhai, Yizhao Liu

2020-03-20 · Tasks: Fine-Grained Visual Categorization, Fine-Grained Image Recognition, Object Recognition, Fine-Grained Image Classification

Abstract

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been one of the most authoritative academic competitions in computer vision (CV) in recent years, but applying each year's winning ILSVRC model directly to fine-grained visual categorization (FGVC) does not achieve good performance. FGVC is challenging because of its small inter-class variations and large intra-class variations. Our attention object location module (AOLM) can predict the position of the object, and our attention part proposal module (APPM) can propose informative part regions, neither requiring bounding-box or part annotations. The obtained object images contain almost the entire structure of the object along with finer details, the part images cover many different scales and carry more fine-grained features, and the raw images contain the complete object. These three kinds of training images are supervised by our multi-branch network. As a result, our multi-branch and multi-scale attention learning network (MMAL-Net) has good classification ability and robustness for images of different scales. Our approach can be trained end to end while providing short inference time. Comprehensive experiments demonstrate that our approach achieves state-of-the-art results on the CUB-200-2011, FGVC-Aircraft, and Stanford Cars datasets. Our code will be available at https://github.com/ZF1044404254/MMAL-Net
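The abstract says AOLM locates the object from the network's own attention, without bounding-box annotations. A minimal sketch of that general idea, assuming the common recipe of aggregating a CNN's channel activations and thresholding at their mean (the function name and details here are hypothetical, not the paper's exact implementation):

```python
import numpy as np

def attention_object_box(feature_map):
    """Hypothetical AOLM-style localization sketch: aggregate conv
    channel activations, threshold at their mean, and return the
    bounding box of the activated region -- no bbox labels needed.

    feature_map: (C, H, W) array of CNN activations for one image.
    Returns (y0, x0, y1, x1) in feature-map coordinates.
    """
    act = feature_map.mean(axis=0)      # (H, W) aggregated activation map
    mask = act > act.mean()             # keep above-average positions
    ys, xs = np.where(mask)
    if ys.size == 0:                    # degenerate case: keep full frame
        return 0, 0, feature_map.shape[1], feature_map.shape[2]
    # Upscaling this box by the network's total stride would give the
    # crop of the object region in the input image.
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1
```

In the paper's pipeline, the crop obtained this way feeds the object branch, and part proposals from APPM feed the part branch alongside the raw image, giving the three supervised branches described above.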

Results

Task                               Dataset       Metric    Value  Model
Image Classification               CUB-200-2011  Accuracy  89.6   TBMSL-Net
Fine-Grained Image Classification  CUB-200-2011  Accuracy  89.6   TBMSL-Net

Related Papers

GeoMag: A Vision-Language Model for Pixel-level Fine-Grained Remote Sensing Image Parsing (2025-07-08)
Out-of-distribution detection in 3D applications: a review (2025-07-01)
Hierarchical Mask-Enhanced Dual Reconstruction Network for Few-Shot Fine-Grained Image Classification (2025-06-25)
SASep: Saliency-Aware Structured Separation of Geometry and Feature for Open Set Learning on Point Clouds (2025-06-16)
Structural feature enhanced transformer for fine-grained image recognition (2025-06-14)
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers (2025-06-13)
Continual Hyperbolic Learning of Instances and Classes (2025-06-12)
DCIRNet: Depth Completion with Iterative Refinement for Dexterous Grasping of Transparent and Reflective Objects (2025-06-11)