Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers

Mandela Patrick, Dylan Campbell, Yuki M. Asano, Ishan Misra, Florian Metze, Christoph Feichtenhofer, Andrea Vedaldi, João F. Henriques

Published 2021-06-09 · NeurIPS 2021 · Tasks: Action Classification, Action Recognition, Temporal Action Localization
Paper · PDF · Code (official)

Abstract

In video transformers, the time dimension is often treated in the same way as the two spatial dimensions. However, in a scene where objects or the camera may move, a physical point imaged at one location in frame $t$ may be entirely unrelated to what is found at that location in frame $t+k$. These temporal correspondences should be modeled to facilitate learning about dynamic scenes. To this end, we propose a new drop-in block for video transformers -- trajectory attention -- that aggregates information along implicitly determined motion paths. We additionally propose a new method to address the quadratic dependence of computation and memory on the input size, which is particularly important for high resolution or long videos. While these ideas are useful in a range of settings, we apply them to the specific task of video action recognition with a transformer model and obtain state-of-the-art results on the Kinetics, Something--Something V2, and Epic-Kitchens datasets. Code and models are available at: https://github.com/facebookresearch/Motionformer
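
To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the two-stage attention described in the abstract: each query token first attends over the patches of every frame separately (implicitly tracing a motion path), and the resulting per-frame trajectory tokens are then pooled over time. This is only an illustration of the idea, not the authors' implementation; in particular it omits multi-head projections and the approximation the paper introduces for the quadratic cost (see the linked repository for the official code).

```python
import torch


def trajectory_attention(q, k, v, num_frames):
    """Simplified sketch of trajectory-style attention (illustrative only).

    q, k, v: (batch, frames * patches, dim) space-time tokens, assumed to be
    ordered frame-major (all patches of frame 0, then frame 1, ...).
    Stage 1: each query attends over the patches of every frame separately,
             producing one "trajectory token" per (query, frame).
    Stage 2: the trajectory tokens are pooled over time with a second softmax.
    """
    b, n, d = q.shape
    t = num_frames
    s = n // t                      # patches per frame
    scale = d ** -0.5

    # Similarity of every query to every key, grouped by frame: (b, n, t, s)
    attn = torch.einsum("bqd,bkd->bqk", q, k).reshape(b, n, t, s) * scale
    spatial = attn.softmax(dim=-1)  # softmax within each frame

    # Trajectory tokens: one pooled value per (query, frame): (b, n, t, d)
    traj = torch.einsum("bqts,btsd->bqtd", spatial, v.reshape(b, t, s, d))

    # Stage 2: pool along the implied trajectory over time. The paper uses
    # separate learned projections here; we reuse q for brevity.
    temporal = torch.einsum("bqd,bqtd->bqt", q, traj).mul(scale).softmax(dim=-1)
    return torch.einsum("bqt,bqtd->bqd", temporal, traj)


# Example: 2 frames of 4 patches each, 8-dimensional tokens
x = torch.randn(1, 8, 8)
out = trajectory_attention(x, x, x, num_frames=2)
print(out.shape)  # torch.Size([1, 8, 8])
```

In the official codebase the spatial and temporal stages use separate learned projections and multiple heads; the sketch collapses those details to keep the shape bookkeeping visible.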

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 81.1 | Motionformer-HR
Video | Kinetics-400 | Acc@5 | 95.2 | Motionformer-HR
Activity Recognition | EPIC-KITCHENS-100 | Action@1 | 44.5 | Mformer-HR
Activity Recognition | EPIC-KITCHENS-100 | Noun@1 | 58.5 | Mformer-HR
Activity Recognition | EPIC-KITCHENS-100 | Verb@1 | 67 | Mformer-HR
Activity Recognition | EPIC-KITCHENS-100 | Action@1 | 44.1 | Mformer-L
Activity Recognition | EPIC-KITCHENS-100 | Noun@1 | 57.6 | Mformer-L
Activity Recognition | EPIC-KITCHENS-100 | Verb@1 | 67.1 | Mformer-L
Activity Recognition | EPIC-KITCHENS-100 | Action@1 | 43.1 | Mformer
Activity Recognition | EPIC-KITCHENS-100 | Noun@1 | 56.5 | Mformer
Activity Recognition | EPIC-KITCHENS-100 | Verb@1 | 66.7 | Mformer
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 68.1 | Mformer-L
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 91.2 | Mformer-L
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 67.1 | Mformer-HR
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 90.6 | Mformer-HR
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 66.5 | Mformer
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 90.1 | Mformer
Action Recognition | EPIC-KITCHENS-100 | Action@1 | 44.5 | Mformer-HR
Action Recognition | EPIC-KITCHENS-100 | Noun@1 | 58.5 | Mformer-HR
Action Recognition | EPIC-KITCHENS-100 | Verb@1 | 67 | Mformer-HR
Action Recognition | EPIC-KITCHENS-100 | Action@1 | 44.1 | Mformer-L
Action Recognition | EPIC-KITCHENS-100 | Noun@1 | 57.6 | Mformer-L
Action Recognition | EPIC-KITCHENS-100 | Verb@1 | 67.1 | Mformer-L
Action Recognition | EPIC-KITCHENS-100 | Action@1 | 43.1 | Mformer
Action Recognition | EPIC-KITCHENS-100 | Noun@1 | 56.5 | Mformer
Action Recognition | EPIC-KITCHENS-100 | Verb@1 | 66.7 | Mformer
Action Recognition | Something-Something V2 | Top-1 Accuracy | 68.1 | Mformer-L
Action Recognition | Something-Something V2 | Top-5 Accuracy | 91.2 | Mformer-L
Action Recognition | Something-Something V2 | Top-1 Accuracy | 67.1 | Mformer-HR
Action Recognition | Something-Something V2 | Top-5 Accuracy | 90.6 | Mformer-HR
Action Recognition | Something-Something V2 | Top-1 Accuracy | 66.5 | Mformer
Action Recognition | Something-Something V2 | Top-5 Accuracy | 90.1 | Mformer
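
The Acc@1/Acc@5 and Top-1/Top-5 numbers above are standard top-k accuracies. A minimal, generic PyTorch sketch (not taken from the Motionformer codebase) of how such metrics are typically computed from per-clip class scores:

```python
import torch


def topk_accuracy(logits, labels, ks=(1, 5)):
    """Percentage of samples whose true label is among the top-k predictions.

    logits: (num_samples, num_classes) raw scores; labels: (num_samples,).
    Returns a dict such as {1: acc@1, 5: acc@5}.
    """
    max_k = max(ks)
    # Indices of the k highest-scoring classes per sample: (num_samples, max_k)
    topk = logits.topk(max_k, dim=1).indices
    correct = topk.eq(labels.unsqueeze(1))          # (num_samples, max_k) bools
    return {k: 100.0 * correct[:, :k].any(dim=1).float().mean().item() for k in ks}


# Example with random scores over 400 classes (a Kinetics-400-sized label space)
scores = torch.randn(8, 400)
labels = torch.randint(0, 400, (8,))
print(topk_accuracy(scores, labels))
```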

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)