Yue Zhao, Philipp Krähenbühl
Videos are big, complex to pre-process, and slow to train on. State-of-the-art large-scale video models are trained on clusters of 32 or more GPUs for several days. As a consequence, academia has largely ceded the training of large video models to industry. In this paper, we show how to train a state-of-the-art video model on a single machine with eight consumer-grade GPUs in a day. We identify three bottlenecks, IO, CPU, and GPU computation, and optimize each. The result is a highly efficient video training pipeline. For comparable architectures, our pipeline achieves higher accuracy with $\frac{1}{8}$ of the computation of prior work. Code is available at https://github.com/zhaoyue-zephyrus/AVION.
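One generic way to hide IO and CPU bottlenecks behind GPU computation is to prefetch data on a background thread so the consumer never waits on decoding. The sketch below is illustrative only and is not the authors' implementation; the `Prefetcher` class and the `decode` stand-in are hypothetical names, and real video pipelines would decode actual frames rather than integers.

```python
import queue
import threading

class Prefetcher:
    """Wrap an iterator and pull items on a background thread.

    Overlapping data loading (IO/CPU work) with consumption (GPU compute)
    is one standard way to keep a training pipeline busy end to end.
    """
    _DONE = object()  # sentinel marking iterator exhaustion

    def __init__(self, iterable, buffer_size=4):
        self._queue = queue.Queue(maxsize=buffer_size)
        self._thread = threading.Thread(
            target=self._produce, args=(iter(iterable),), daemon=True)
        self._thread.start()

    def _produce(self, it):
        # Runs on the background thread: fill the bounded queue.
        for item in it:
            self._queue.put(item)
        self._queue.put(self._DONE)

    def __iter__(self):
        # Runs on the consumer thread: drain until the sentinel.
        while True:
            item = self._queue.get()
            if item is self._DONE:
                return
            yield item

# Toy usage: "decode" items in the background while consuming them.
def decode(i):
    return i * i  # stand-in for an expensive video-decode step

batches = list(Prefetcher(map(decode, range(8))))
print(batches)  # same items as a plain loop, produced concurrently
```

The bounded queue is the key design choice: it gives back-pressure, so a fast producer cannot exhaust memory while a slow consumer catches up.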
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | EPIC-KITCHENS-100 | Action@1 | 54.4 | Avion (ViT-L) |
| Action Recognition | EPIC-KITCHENS-100 | Noun@1 | 65.4 | Avion (ViT-L) |
| Action Recognition | EPIC-KITCHENS-100 | Verb@1 | 73.0 | Avion (ViT-L) |