Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, Liang-Chieh Chen
This paper presents MOAT, a family of neural networks that build on top of MObile convolution (i.e., inverted residual blocks) and ATtention. Unlike current works that stack separate mobile convolution and transformer blocks, we effectively merge them into a single MOAT block. Starting with a standard Transformer block, we replace its multi-layer perceptron with a mobile convolution block and further reorder it before the self-attention operation. The mobile convolution block not only enhances the network's representation capacity, but also produces better downsampled features. Our conceptually simple MOAT networks are surprisingly effective, achieving 89.1% / 81.5% top-1 accuracy on ImageNet-1K / ImageNet-1K-V2 with ImageNet-22K pretraining. Additionally, MOAT can be seamlessly applied to downstream tasks that require large-resolution inputs by simply converting the global attention to window attention. Thanks to the mobile convolution, which effectively exchanges local information between pixels (and thus across windows), MOAT does not need an extra window-shifting mechanism. As a result, on COCO object detection, MOAT achieves 59.2% box AP with 227M model parameters (single-scale inference, hard NMS), and on ADE20K semantic segmentation, MOAT attains 57.6% mIoU with 496M model parameters (single-scale inference). Finally, the tiny-MOAT family, obtained by simply reducing the channel sizes, surprisingly outperforms several mobile-specific transformer-based models on ImageNet. The tiny-MOAT family is also benchmarked on downstream tasks, serving as a baseline for the community. We hope our simple yet effective MOAT will inspire more seamless integration of convolution and self-attention. Code is publicly available.
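The block design described in the abstract is simple enough to sketch directly. Below is a minimal PyTorch sketch of a MOAT block, written under stated assumptions: the official implementation is in TensorFlow, and details such as the exact normalization placement, activation, expansion ratio, squeeze-and-excitation, drop path, and relative position biases are omitted or simplified, so the `MBConv` and `MOATBlock` classes below are illustrative, not the paper's exact configuration. The sketch captures the two key ideas: the Transformer block's MLP is replaced by a mobile convolution (inverted residual) block that is reordered *before* self-attention, and the same block converts from global to window attention through a `window_size` argument, with no window-shifting mechanism.

```python
# Minimal MOAT-block sketch in PyTorch. Assumptions (not from the paper's
# official TensorFlow code): BatchNorm + GELU inside MBConv, expansion 4,
# no squeeze-and-excitation, no drop path, no relative position bias.
import torch
import torch.nn as nn


class MBConv(nn.Module):
    """Mobile convolution (inverted residual) block:
    1x1 expand -> 3x3 depthwise -> 1x1 project, plus a residual.
    With stride=2 in the depthwise conv it also acts as the
    downsampling operator, producing the downsampled features
    the abstract refers to."""

    def __init__(self, dim, expansion=4, stride=1):
        super().__init__()
        hidden = dim * expansion
        self.pre_norm = nn.BatchNorm2d(dim)
        self.expand = nn.Conv2d(dim, hidden, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(hidden)
        self.dw = nn.Conv2d(hidden, hidden, 3, stride=stride,
                            padding=1, groups=hidden, bias=False)
        self.bn2 = nn.BatchNorm2d(hidden)
        self.project = nn.Conv2d(hidden, dim, 1, bias=False)
        self.act = nn.GELU()
        self.shortcut = nn.AvgPool2d(2) if stride == 2 else nn.Identity()

    def forward(self, x):
        s = self.shortcut(x)
        x = self.pre_norm(x)
        x = self.act(self.bn1(self.expand(x)))
        x = self.act(self.bn2(self.dw(x)))
        return self.project(x) + s


class MOATBlock(nn.Module):
    """MOAT block: the Transformer MLP is replaced by an MBConv and
    moved *before* self-attention. window_size=None gives global
    attention (classification); an integer window size gives
    non-overlapping window attention for large-resolution inputs.
    The MBConv already exchanges information across window borders,
    so no window shifting is needed."""

    def __init__(self, dim, num_heads=8, window_size=None):
        super().__init__()
        self.mbconv = MBConv(dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.window_size = window_size

    def forward(self, x):  # x: (B, C, H, W); H, W divisible by window_size
        x = self.mbconv(x)
        B, C, H, W = x.shape
        w = self.window_size
        if w is None:  # global attention over all H*W tokens
            t = x.flatten(2).transpose(1, 2)                # (B, H*W, C)
        else:          # partition into (H/w)*(W/w) windows of w*w tokens
            t = (x.reshape(B, C, H // w, w, W // w, w)
                  .permute(0, 2, 4, 3, 5, 1)
                  .reshape(-1, w * w, C))                   # (B*nW, w*w, C)
        y = self.norm(t)
        y, _ = self.attn(y, y, y, need_weights=False)
        t = t + y                                           # residual
        if w is None:
            return t.transpose(1, 2).reshape(B, C, H, W)
        return (t.reshape(B, H // w, W // w, w, w, C)
                 .permute(0, 5, 1, 3, 2, 4)
                 .reshape(B, C, H, W))


if __name__ == "__main__":
    blk = MOATBlock(dim=64, num_heads=4)                    # global attention
    print(blk(torch.randn(2, 64, 16, 16)).shape)            # (2, 64, 16, 16)
    blk_w = MOATBlock(dim=64, num_heads=4, window_size=8)   # window attention
    print(blk_w(torch.randn(2, 64, 16, 16)).shape)          # (2, 64, 16, 16)
```

In this sketch, adapting a pretrained classification backbone to detection or segmentation amounts to setting `window_size` on the high-resolution stages; because cross-window mixing comes from the depthwise convolution rather than from shifted windows, the block itself is unchanged.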
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | ADE20K | Params (M) | 496 | MOAT-4 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 57.6 | MOAT-4 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 198 | MOAT-3 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 56.5 | MOAT-3 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 81 | MOAT-2 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 54.7 | MOAT-2 (IN-22K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 24 | tiny-MOAT-3 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 47.5 | tiny-MOAT-3 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 13 | tiny-MOAT-2 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 44.9 | tiny-MOAT-2 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 8 | tiny-MOAT-1 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 43.1 | tiny-MOAT-1 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Params (M) | 6 | tiny-MOAT-0 (IN-1K pretraining, single-scale) |
| Semantic Segmentation | ADE20K | Validation mIoU | 41.2 | tiny-MOAT-0 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 59.2 | MOAT-3 (IN-22K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 58.5 | MOAT-2 (IN-22K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 57.7 | MOAT-1 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 55.9 | MOAT-0 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 55.2 | tiny-MOAT-3 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 53.0 | tiny-MOAT-2 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 51.9 | tiny-MOAT-1 (IN-1K pretraining, single-scale) |
| Object Detection | COCO minival | box AP | 50.5 | tiny-MOAT-0 (IN-1K pretraining, single-scale) |
| Image Classification | ImageNet V2 | Top-1 Accuracy | 81.5 | MOAT-4 (IN-22K pretraining) |
| Image Classification | ImageNet V2 | Top-1 Accuracy | 80.6 | MOAT-3 (IN-22K pretraining) |
| Image Classification | ImageNet V2 | Top-1 Accuracy | 79.3 | MOAT-2 (IN-22K pretraining) |
| Image Classification | ImageNet V2 | Top-1 Accuracy | 78.4 | MOAT-1 (IN-22K pretraining) |
| Image Classification | ImageNet | GFLOPs | 648.5 | MOAT-4 (IN-22K pretraining) |
| Image Classification | ImageNet | GFLOPs | 271.0 | MOAT-3 (IN-1K only) |
| Image Classification | ImageNet | GFLOPs | 5.7 | MOAT-0 (IN-1K only) |
| Instance Segmentation | COCO minival | mask AP | 50.3 | MOAT-3 (IN-22K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 49.3 | MOAT-2 (IN-22K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 49.0 | MOAT-1 (IN-1K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 47.4 | MOAT-0 (IN-1K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 47.0 | tiny-MOAT-3 (IN-1K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 45.0 | tiny-MOAT-2 (IN-1K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 44.6 | tiny-MOAT-1 (IN-1K pretraining, single-scale) |
| Instance Segmentation | COCO minival | mask AP | 43.3 | tiny-MOAT-0 (IN-1K pretraining, single-scale) |