Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias

Yufei Xu, Qiming Zhang, Jing Zhang, Dacheng Tao

2021-06-07 · NeurIPS 2021 · Image Classification · Video Object Segmentation · Object Detection
Paper · PDF · Code (official)

Abstract

Transformers have shown great potential in various computer vision tasks owing to their strong capability in modeling long-range dependency using the self-attention mechanism. Nevertheless, vision transformers treat an image as a 1D sequence of visual tokens, lacking an intrinsic inductive bias (IB) in modeling local visual structures and dealing with scale variance. Alternatively, they require large-scale training data and longer training schedules to learn the IB implicitly. In this paper, we propose a novel Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE. Technically, ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context by using multiple convolutions with different dilation rates. In this way, it acquires an intrinsic scale invariance IB and is able to learn robust feature representations for objects at various scales. Moreover, in each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network. Consequently, it has the intrinsic locality IB and is able to learn local features and global dependencies collaboratively. Experiments on ImageNet as well as downstream tasks prove the superiority of ViTAE over the baseline transformer and concurrent works. Source code and pretrained models will be available at GitHub.
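The parallel-branch design described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the official implementation: it uses a single attention head, a single shared 3x3 kernel per spatial offset, sum-fusion of the two branches, and omits normalization, projections, and the multi-dilation pyramid reduction cells; all weight shapes and the fusion-by-sum choice are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over an (N, D) token sequence (global branch)."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def conv3x3(grid, kernel, dilation=1):
    """'Same'-padded 3x3 conv with one scalar weight per spatial offset,
    applied to an (H, W, D) token grid (local branch). Dilation > 1 gives
    the enlarged receptive fields used for multi-scale context."""
    H, W, D = grid.shape
    d = dilation
    padded = np.pad(grid, ((d, d), (d, d), (0, 0)))
    out = np.zeros_like(grid)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i * d:i * d + H, j * d:j * d + W, :]
    return out

H = W = 4
D = 8
grid = rng.standard_normal((H, W, D))      # tokens on their 2D spatial grid
tokens = grid.reshape(H * W, D)

# Two parallel branches: self-attention (global dependency) and convolution (locality IB).
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
attn_out = self_attention(tokens, Wq, Wk, Wv)
conv_out = conv3x3(grid, rng.standard_normal((3, 3)) * 0.1).reshape(H * W, D)

# Fuse the branch features and feed them into the feed-forward network.
fused = attn_out + conv_out
W1 = rng.standard_normal((D, 4 * D)) * 0.1
W2 = rng.standard_normal((4 * D, D)) * 0.1
ffn_out = np.maximum(fused @ W1, 0) @ W2   # 2-layer MLP with ReLU

out = tokens + ffn_out                     # residual connection around the layer
print(out.shape)                           # (16, 8): one token per grid cell
```

The conv branch sees only a local neighborhood of each token while the attention branch mixes all tokens, so the fused input to the FFN carries both local structure and global context in one layer.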

Results

Task                      | Dataset    | Metric         | Value | Model
Image Classification      | ImageNet   | GFLOPs         | 27.6  | ViTAE-B-Stage
Image Classification      | ImageNet   | GFLOPs         | 12    | ViTAE-S-Stage
Image Classification      | ImageNet   | GFLOPs         | 6.8   | ViTAE-13M
Image Classification      | ImageNet   | GFLOPs         | 4     | ViTAE-6M
Image Classification      | ImageNet   | GFLOPs         | 4.6   | ViTAE-T-Stage
Image Classification      | ImageNet   | GFLOPs         | 3     | ViTAE-T
Video Object Segmentation | DAVIS 2017 | F-Score        | 85.5  | ViTAE-T-Stage
Video Object Segmentation | DAVIS 2017 | J&F            | 82.5  | ViTAE-T-Stage
Video Object Segmentation | DAVIS 2017 | Jaccard (Mean) | 79.4  | ViTAE-T-Stage
Video Object Segmentation | DAVIS 2016 | F-Score        | 90.4  | ViTAE-T-Stage
Video Object Segmentation | DAVIS 2016 | J&F            | 89.8  | ViTAE-T-Stage
Video Object Segmentation | DAVIS 2016 | Jaccard (Mean) | 89.2  | ViTAE-T-Stage

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)