Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the ViT of \cite{dosovitskiy2020image} for encoding high-resolution images using two techniques. The first is the multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer \cite{beltagy2020longformer} originally developed for natural language processing, which achieves linear complexity w.r.t. the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models and their ResNet counterparts, as well as the Pyramid Vision Transformer from a concurrent work \cite{wang2021pyramid}, on a range of vision tasks, including image classification, object detection, and segmentation. The models and source code are released at \url{https://github.com/microsoft/vision-longformer}.
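The linear complexity of Longformer-style attention comes from restricting each token to a fixed-size local window, so the cost grows as $O(n \cdot w)$ rather than $O(n^2)$ in the number of tokens $n$. The following is a minimal 1-D sketch of that idea for illustration only; it is not the paper's 2-D windowed kernel, and the function name and shapes are our own assumptions:

```python
import numpy as np

def sliding_window_attention(q, k, v, w):
    """Naive 1-D sliding-window attention: each of the n tokens attends
    only to the <= 2*w + 1 tokens in its local window, giving O(n * w)
    cost instead of the O(n^2) of full self-attention.
    q, k, v: arrays of shape (n, d); w: half window size (assumed API)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)   # (window,) logits
        weights = np.exp(scores - scores.max())    # stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]                # convex combo of window values
    return out

rng = np.random.default_rng(0)
n, d, w = 16, 8, 2
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = sliding_window_attention(q, k, v, w)
print(out.shape)  # (16, 8)
```

Vision Longformer additionally uses a small number of global tokens that attend to (and are attended by) all local tokens, which this sketch omits for brevity.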
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Object Detection | COCO minival | AP75 | 47.6 | RetinaNet (ViL-Base, multi-scale, 3x) |
| Object Detection | COCO minival | APL | 58.1 | RetinaNet (ViL-Base, multi-scale, 3x) |
| Object Detection | COCO minival | APM | 48.0 | RetinaNet (ViL-Base, multi-scale, 3x) |
| Object Detection | COCO minival | APS | 29.9 | RetinaNet (ViL-Base, multi-scale, 3x) |
| Object Detection | COCO minival | box AP | 44.7 | RetinaNet (ViL-Base, multi-scale, 3x) |
| Object Detection | COCO minival | AP50 | 65.5 | RetinaNet (ViL-Base) |
| Object Detection | COCO minival | AP75 | 47.1 | RetinaNet (ViL-Base) |
| Object Detection | COCO minival | APL | 58.3 | RetinaNet (ViL-Base) |
| Object Detection | COCO minival | APM | 47.9 | RetinaNet (ViL-Base) |
| Object Detection | COCO minival | APS | 28.9 | RetinaNet (ViL-Base) |
| Object Detection | COCO minival | box AP | 44.3 | RetinaNet (ViL-Base) |
| Image Classification | ImageNet | GFLOPs | 8.7 | ViL-Medium-D |
| Image Classification | ImageNet | GFLOPs | 13.4 | ViL-Base-D |
| Image Classification | ImageNet | GFLOPs | 4.86 | ViL-Small |
| Image Classification | ImageNet | GFLOPs | 6.74 | ViL-Base-W |
| Image Classification | ImageNet | GFLOPs | 1.3 | ViL-Tiny-RPB |
| Instance Segmentation | COCO minival | AP75 | 49.9 | Mask R-CNN (ViL-Base, multi-scale, 3x) |
| Instance Segmentation | COCO minival | mask AP | 45.7 | Mask R-CNN (ViL-Base, multi-scale, 3x) |
| Instance Segmentation | COCO minival | AP50 | 67.2 | Mask R-CNN (ViL-Base, 1x) |
| Instance Segmentation | COCO minival | AP75 | 49.3 | Mask R-CNN (ViL-Base, 1x) |
| Instance Segmentation | COCO minival | mask AP | 45.1 | Mask R-CNN (ViL-Base, 1x) |