Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Swin Transformer V2: Scaling Up Capacity and Resolution

Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo

2021-11-18 · CVPR 2022 · Tasks: Image Classification, Action Classification, Semantic Segmentation, Instance Segmentation, Object Detection
Paper · PDF · Code (official)

Abstract

Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like those of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger for labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that of Google's billion-level visual models, consuming 40 times less labelled data and 40 times less training time. Code is available at https://github.com/microsoft/Swin-Transformer.
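The abstract's first two techniques can be illustrated with a minimal NumPy sketch. This is an illustration only, not the authors' implementation: the paper uses a learnable per-head temperature (clamped above 0.01) and a small meta-network that maps log-spaced relative coordinates to bias values; here the temperature is a fixed scalar and only the coordinate transform is shown.

```python
import numpy as np

def cosine_attention(q, k, v, tau=0.1, bias=None):
    """Scaled cosine attention: similarity is the cosine of (q, k)
    divided by a temperature tau, instead of the usual scaled dot
    product. This bounds the attention logits and (per the abstract)
    improves training stability at large model sizes."""
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    kn = k / np.linalg.norm(k, axis=-1, keepdims=True)
    sim = qn @ kn.T / tau                      # (Lq, Lk) cosine similarities
    if bias is not None:
        sim = sim + bias                       # relative position bias term
    sim = sim - sim.max(axis=-1, keepdims=True)  # numerically stable softmax
    attn = np.exp(sim)
    attn = attn / attn.sum(axis=-1, keepdims=True)
    return attn @ v

def log_spaced_coords(window_size):
    """Log-spaced relative coordinates used by the continuous position
    bias: sign(x) * log2(1 + |x|). Large offsets are compressed, so the
    bias extrapolates smoothly when fine-tuning at higher resolution
    (i.e. with larger windows) than used in pre-training."""
    r = np.arange(-(window_size - 1), window_size)
    return np.sign(r) * np.log2(1.0 + np.abs(r))
```

For example, `log_spaced_coords(2)` maps the raw offsets [-1, 0, 1] to [-1.0, 0.0, 1.0], while an offset of 7 maps to 3.0, so doubling the window size only extends the coordinate range modestly.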

Results

Task | Dataset | Metric | Value | Model
Action Classification | Kinetics-400 | Acc@1 | 86.8 | Video-SwinV2-G (ImageNet-22k and external 70M pretrain)
Semantic Segmentation | ADE20K | Validation mIoU | 59.9 | SwinV2-G (UperNet)
Semantic Segmentation | ADE20K | Validation mIoU | 53.7 | SwinV2-G-HTC++ (Liu et al., 2021a)
Object Detection | COCO test-dev | Params (M) | 3000 | SwinV2-G (HTC++)
Object Detection | COCO test-dev | box mAP | 63.1 | SwinV2-G (HTC++)
Object Detection | COCO minival | box AP | 62.5 | SwinV2-G (HTC++)
Instance Segmentation | COCO test-dev | mask AP | 54.4 | SwinV2-G (HTC++)
Instance Segmentation | COCO minival | mask AP | 53.7 | SwinV2-G (HTC++)
Image Classification | ImageNet V2 | Top-1 Accuracy | 78.08 | SwinV2-B

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)