Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Training a Large Video Model on a Single Machine in a Day

Yue Zhao, Philipp Krähenbühl

2023-09-28 · Multi-Instance Retrieval · Action Recognition
Paper · PDF · Code (official)

Abstract

Videos are big, complex to pre-process, and slow to train on. State-of-the-art large-scale video models are trained on clusters of 32 or more GPUs for several days. As a consequence, academia largely ceded the training of large video models to industry. In this paper, we show how to still train a state-of-the-art video model on a single machine with eight consumer-grade GPUs in a day. We identify three bottlenecks, IO, CPU, and GPU computation, and optimize each. The result is a highly efficient video training pipeline. For comparable architectures, our pipeline achieves higher accuracies with $\frac{1}{8}$ of the computation compared to prior work. Code is available at https://github.com/zhaoyue-zephyrus/AVION.
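The abstract's core idea is to keep the GPU busy by overlapping the IO and CPU-side work (reading and decoding video) with the training computation. The paper's actual pipeline lives in the linked AVION repository; as a minimal illustration of the overlap pattern only, the sketch below prefetches "decoded" samples on a background thread through a bounded queue so the consumer (standing in for the GPU step) rarely stalls. The `decode` and `train_step` stand-ins are hypothetical placeholders, not functions from the paper's code.

```python
import queue
import threading

def prefetch_pipeline(samples, decode, train_step, buffer_size=4):
    """Overlap CPU-side decoding with the training step via a bounded queue.

    A background thread decodes samples ahead of time, so the consumer
    (standing in for the GPU) mostly finds work already waiting.
    """
    q = queue.Queue(maxsize=buffer_size)
    SENTINEL = object()  # marks end of stream

    def producer():
        for s in samples:
            q.put(decode(s))   # CPU/IO work happens off the consumer's path
        q.put(SENTINEL)

    t = threading.Thread(target=producer, daemon=True)
    t.start()

    results = []
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        results.append(train_step(item))  # "GPU" work, overlapped with decoding
    t.join()
    return results

# Toy stand-ins: "decode" doubles each sample id, "train_step" adds one.
decoded = prefetch_pipeline(range(8), decode=lambda s: s * 2,
                            train_step=lambda x: x + 1)
print(decoded)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

Real pipelines apply the same structure with multiple worker processes (e.g. a PyTorch `DataLoader` with `num_workers > 0`) and hardware-accelerated video decoding in place of the toy callables.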

Results

Task                  Dataset            Metric    Value  Model
Activity Recognition  EPIC-KITCHENS-100  Action@1  54.4   Avion (ViT-L)
Activity Recognition  EPIC-KITCHENS-100  Noun@1    65.4   Avion (ViT-L)
Activity Recognition  EPIC-KITCHENS-100  Verb@1    73     Avion (ViT-L)
Action Recognition    EPIC-KITCHENS-100  Action@1  54.4   Avion (ViT-L)
Action Recognition    EPIC-KITCHENS-100  Noun@1    65.4   Avion (ViT-L)
Action Recognition    EPIC-KITCHENS-100  Verb@1    73     Avion (ViT-L)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
EVA02-AT: Egocentric Video-Language Understanding with Spatial-Temporal Rotary Positional Embeddings and Symmetric Optimization (2025-06-17)