Alexey Gritsenko, Xuehan Xiong, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lučić, Cordelia Schmid, Anurag Arnab
The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks. We propose a fully end-to-end, purely transformer-based model that directly ingests an input video and outputs tubelets: sequences of bounding boxes and action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, nor post-processing in the form of non-maximal suppression. We perform extensive ablation experiments, and significantly advance the state of the art on four different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
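The tubelet output format described above lends itself to a simple per-query decoding step. Below is a minimal sketch, assuming per-query model outputs of shape (queries, frames, ...); `Tubelet`, `decode_tubelets`, and the score threshold are hypothetical illustrations, not the paper's actual API:

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Tubelet:
    """One predicted tubelet: a linked sequence of per-frame detections.

    Hypothetical container, not the paper's actual API; shapes assume
    T frames and C action classes.
    """
    boxes: np.ndarray         # (T, 4) per-frame boxes as (x1, y1, x2, y2)
    class_scores: np.ndarray  # (T, C) per-frame action-class probabilities


def decode_tubelets(box_preds: np.ndarray,
                    score_preds: np.ndarray,
                    score_threshold: float = 0.5) -> list[Tubelet]:
    """Turn raw per-query outputs into tubelets.

    box_preds:   (Q, T, 4) -- Q queries, T frames.
    score_preds: (Q, T, C) -- per-frame class probabilities.
    Queries whose peak score never exceeds the threshold are dropped;
    no non-maximal suppression is applied, mirroring the end-to-end
    design described in the abstract.
    """
    tubelets = []
    for boxes, scores in zip(box_preds, score_preds):
        if scores.max() >= score_threshold:
            tubelets.append(Tubelet(boxes=boxes, class_scores=scores))
    return tubelets
```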
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | AVA v2.1 | Val mAP | 41.7 | STAR/L |
| Action Recognition | AVA v2.2 | mAP | 41.7 | STAR/L |
| Action Localization | AVA-Kinetics | Val mAP | 41.7 | STAR/L |
| Action Detection | UCF101-24 | Frame-mAP (IoU 0.5) | 90.3 | STAR/L |
| Action Detection | UCF101-24 | Video-mAP (IoU 0.2) | 88.0 | STAR/L |
| Action Detection | UCF101-24 | Video-mAP (IoU 0.5) | 71.8 | STAR/L |