
BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning

Zhi Hou, Baosheng Yu, Dacheng Tao

2022-03-03 · CVPR 2022

Tasks: Zero-Shot Learning, Domain Generalization, Representation Learning, Long-tail Learning, Contrastive Learning, Compositional Zero-Shot Learning

Paper · PDF · Code (official)

Abstract

Despite the success of deep neural networks, many challenges remain in deep representation learning due to data-scarcity issues such as data imbalance, unseen distributions, and domain shift. To address these issues, a variety of methods have been devised to explore sample relationships in a vanilla way (i.e., from the perspective of either the input or the loss function), without exploiting the internal structure of deep neural networks for learning with sample relationships. Inspired by this, we propose to equip deep neural networks themselves with the ability to learn sample relationships from each mini-batch. Specifically, we introduce a batch transformer module, or BatchFormer, which is applied along the batch dimension of each mini-batch to implicitly explore sample relationships during training. In this way, the proposed method enables collaboration between different samples; for example, head-class samples can contribute to the learning of tail classes in long-tailed recognition. Furthermore, to mitigate the gap between training and testing, we share the classifier between the streams with and without BatchFormer during training, so that BatchFormer can be removed at test time. We perform extensive experiments on over ten datasets, and the proposed method achieves significant improvements on different data-scarcity applications without any bells and whistles, including long-tailed recognition, compositional zero-shot learning, domain generalization, and contrastive learning. Code will be made publicly available at https://github.com/zhihou7/BatchFormer.
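The core idea above — letting samples in a mini-batch attend to one another — can be sketched as single-head self-attention applied along the batch dimension with a residual connection. This is a minimal illustrative sketch in NumPy, not the official implementation (which inserts a standard transformer encoder layer); the function and weight names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def batch_former(features, wq, wk, wv):
    """Single-head self-attention along the BATCH dimension.

    features: (B, D) mini-batch of sample features. Each sample attends
    to every other sample in the batch, so (for example) head-class
    samples can inform the representations of tail-class samples.
    A residual connection preserves the original features.
    """
    q, k, v = features @ wq, features @ wk, features @ wv
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))  # (B, B) sample-to-sample weights
    return features + attn @ v            # residual connection

# Toy usage: a mini-batch of 8 samples with 16-dim features
rng = np.random.default_rng(0)
B, D = 8, 16
x = rng.standard_normal((B, D))
wq, wk, wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out = batch_former(x, wq, wk, wv)
print(out.shape)  # (8, 16): one enriched feature vector per sample
```

During training, both `x` and `batch_former(x)` would be fed through the same (shared) classifier; at test time only `x` is used, so the module can be dropped without changing the inference architecture.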

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | PACS | Average Accuracy | 88.6 | BatchFormer (ResNet-50, SWAD)
Image Classification | ImageNet-LT | Top-1 Accuracy | 57.4 | BatchFormer (ResNet-50, PaCo)
Image Classification | ImageNet-LT | Top-1 Accuracy | 55.7 | BatchFormer (ResNet-50, RIDE)
Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 47.6 | PaCo + BatchFormer
Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 48.3 | Balanced + BatchFormer
Few-Shot Image Classification | ImageNet-LT | Top-1 Accuracy | 57.4 | BatchFormer (ResNet-50, PaCo)
Few-Shot Image Classification | ImageNet-LT | Top-1 Accuracy | 55.7 | BatchFormer (ResNet-50, RIDE)
Few-Shot Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 47.6 | PaCo + BatchFormer
Few-Shot Image Classification | CIFAR-100-LT (ρ=100) | Error Rate | 48.3 | Balanced + BatchFormer
Generalized Few-Shot Classification | ImageNet-LT | Top-1 Accuracy | 57.4 | BatchFormer (ResNet-50, PaCo)
Generalized Few-Shot Classification | ImageNet-LT | Top-1 Accuracy | 55.7 | BatchFormer (ResNet-50, RIDE)
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 47.6 | PaCo + BatchFormer
Generalized Few-Shot Classification | CIFAR-100-LT (ρ=100) | Error Rate | 48.3 | Balanced + BatchFormer
Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 57.4 | BatchFormer (ResNet-50, PaCo)
Long-tail Learning | ImageNet-LT | Top-1 Accuracy | 55.7 | BatchFormer (ResNet-50, RIDE)
Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 47.6 | PaCo + BatchFormer
Long-tail Learning | CIFAR-100-LT (ρ=100) | Error Rate | 48.3 | Balanced + BatchFormer
Generalized Few-Shot Learning | ImageNet-LT | Top-1 Accuracy | 57.4 | BatchFormer (ResNet-50, PaCo)
Generalized Few-Shot Learning | ImageNet-LT | Top-1 Accuracy | 55.7 | BatchFormer (ResNet-50, RIDE)
Generalized Few-Shot Learning | CIFAR-100-LT (ρ=100) | Error Rate | 47.6 | PaCo + BatchFormer
Generalized Few-Shot Learning | CIFAR-100-LT (ρ=100) | Error Rate | 48.3 | Balanced + BatchFormer
Domain Generalization | PACS | Average Accuracy | 88.6 | BatchFormer (ResNet-50, SWAD)

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)