Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo
Large-scale NLP models have been shown to significantly improve performance on language tasks, with no signs of saturation. They also demonstrate remarkable few-shot capabilities, similar to those of humans. This paper aims to explore large-scale models in computer vision. We tackle three major issues in the training and application of large vision models: training instability, resolution gaps between pre-training and fine-tuning, and hunger for labeled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained with low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast numbers of labeled images. Through these techniques, this paper successfully trains a 3 billion-parameter Swin Transformer V2 model, the largest dense vision model to date, and makes it capable of training with images of up to 1,536$\times$1,536 resolution. It sets new performance records on 4 representative vision tasks: ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Note also that our training is much more efficient than that of Google's billion-level visual models, consuming 40 times less labeled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.
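To make the first two techniques concrete, here is a minimal PyTorch sketch (not the authors' implementation; see the official repository for that) of scaled cosine attention with a log-spaced continuous position bias, plus the residual-post-norm ordering. It assumes a single non-shifted window and omits attention masking; names such as `cpb_hidden` and `ResPostNormBlock` are illustrative choices, not from the paper.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledCosineAttention(nn.Module):
    """Window attention where similarity is cos(q, k) / tau plus a position bias
    produced by a small MLP over log-spaced relative coordinates (log-CPB)."""

    def __init__(self, dim, num_heads, window_size=(7, 7), cpb_hidden=512):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head temperature tau, stored as log(1/tau) so it stays positive.
        self.log_tau_inv = nn.Parameter(torch.log(10.0 * torch.ones(num_heads, 1, 1)))
        # Meta-network: maps a 2-D log-spaced offset to one bias value per head.
        self.cpb_mlp = nn.Sequential(
            nn.Linear(2, cpb_hidden), nn.ReLU(inplace=True),
            nn.Linear(cpb_hidden, num_heads, bias=False),
        )
        # Pairwise relative offsets inside one (Wh, Ww) window, log-spaced:
        # delta_hat = sign(delta) * log2(1 + |delta|), roughly normalized.
        wh, ww = window_size
        coords = torch.stack(torch.meshgrid(
            torch.arange(wh), torch.arange(ww), indexing="ij")).flatten(1)  # (2, N)
        rel = (coords[:, :, None] - coords[:, None, :]).permute(1, 2, 0).float()
        rel = torch.sign(rel) * torch.log2(1.0 + rel.abs()) / math.log2(8)
        self.register_buffer("rel_coords", rel)  # (N, N, 2)

    def forward(self, x):  # x: (B, N, C) with N = Wh * Ww
        b, n, c = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv.unbind(0)  # each: (B, heads, N, head_dim)
        # Cosine similarity = dot product of L2-normalized queries and keys.
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        # Clamp so tau never drops below 0.01 (i.e. 1/tau <= 100).
        attn = attn * torch.clamp(self.log_tau_inv, max=math.log(100.0)).exp()
        bias = self.cpb_mlp(self.rel_coords)  # (N, N, heads)
        attn = attn + bias.permute(2, 0, 1).unsqueeze(0)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


class ResPostNormBlock(nn.Module):
    """Residual post-norm: normalize the sublayer output before the residual add,
    rather than the pre-norm ordering x + attn(norm(x)) used in Swin V1."""

    def __init__(self, dim, num_heads, window_size=(7, 7)):
        super().__init__()
        self.attn = ScaledCosineAttention(dim, num_heads, window_size)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return x + self.norm(self.attn(x))
```

Because the bias comes from an MLP evaluated on log-spaced coordinates rather than a learned lookup table indexed by offset, the same weights can be re-evaluated on the coordinates of a larger window at fine-tuning time, which is what enables transfer from low-resolution pre-training to high-resolution inputs.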
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Action Classification | Kinetics-400 | Acc@1 | 86.8 | Video-SwinV2-G (ImageNet-22k and external 70M pretrain) |
| Semantic Segmentation | ADE20K | Validation mIoU | 59.9 | SwinV2-G (UperNet) |
| Object Detection | COCO test-dev | Params (M) | 3000 | SwinV2-G (HTC++) |
| Object Detection | COCO test-dev | box mAP | 63.1 | SwinV2-G (HTC++) |
| Object Detection | COCO minival | box AP | 62.5 | SwinV2-G (HTC++) |
| Image Classification | ImageNet-V2 | Top 1 Accuracy | 78.08 | SwinV2-B |
| Instance Segmentation | COCO minival | mask AP | 53.7 | SwinV2-G (HTC++) |
| Instance Segmentation | COCO test-dev | mask AP | 54.4 | SwinV2-G (HTC++) |