
Moments in Time Dataset: one million videos for event understanding

Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, Aude Oliva

Published: 2018-01-09
Tasks: Multimodal Activity Recognition, Action Recognition, Temporal Action Localization

Abstract

We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3 second videos poses many challenges: meaningful events do not include only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical in time ("opening" is "closing" in reverse), and either transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing separately, and jointly, three modalities: spatial, temporal and auditory. The Moments in Time dataset, designed to have a large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.
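
The dataset structure described above (three-second clips, each tagged with a single label from 339 action classes) maps naturally onto a standard clip-level loader. The sketch below shows one way such clips could be read for training a baseline model; the CSV layout, directory structure, and frame-sampling choices are illustrative assumptions, not the official release format or the authors' pipeline.

```python
# Minimal sketch of a clip-level loader for a Moments-in-Time-style dataset.
# Assumes each ~3-second clip is a video file under `root` and that an
# annotation CSV maps a relative clip path to an integer class index (0..338).
# Paths, field names, and frame counts are illustrative, not the official format.
import csv
from pathlib import Path

from torch.utils.data import Dataset
from torchvision.io import read_video


class MomentsClips(Dataset):
    def __init__(self, root, annotation_csv, num_frames=8):
        self.root = Path(root)
        self.num_frames = num_frames
        with open(annotation_csv) as f:
            # Each row: relative clip path, integer class index.
            self.samples = [(row[0], int(row[1])) for row in csv.reader(f)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        rel_path, label = self.samples[idx]
        # read_video returns (video[T, H, W, C] uint8, audio, info).
        video, _, _ = read_video(str(self.root / rel_path), pts_unit="sec")
        # Uniformly subsample a fixed number of frames from the short clip.
        step = max(1, video.shape[0] // self.num_frames)
        frames = video[::step][: self.num_frames]
        # To [C, T, H, W] float tensor in [0, 1], the layout most 3D CNNs expect.
        frames = frames.permute(3, 0, 1, 2).float() / 255.0
        return frames, label
```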

Results

Task | Dataset | Metric | Value | Model
Video | MiT | Top 1 Accuracy | 28.27 | TRN-Multiscale
Video | MiT | Top 5 Accuracy | 53.87 | TRN-Multiscale
Activity Recognition | Something-Something V1 | Top 1 Accuracy | 50 | ResNet50 I3D (Moments pretrained)
Activity Recognition | Something-Something V1 | Top 1 Accuracy | 48.6 | ResNet50 I3D (Kinetics pretrained)
Activity Recognition | Moments in Time Dataset | Top-1 (%) | 31.16 | Ensemble (SVM)
Activity Recognition | Moments in Time Dataset | Top-5 (%) | 57.67 | Ensemble (SVM)
Activity Recognition | Moments in Time Dataset | Top-1 (%) | 29.51 | I3D
Activity Recognition | Moments in Time Dataset | Top-5 (%) | 56.06 | I3D
Activity Recognition | Moments in Time Dataset | Top-1 (%) | 28.27 | TRN-Multiscale
Activity Recognition | Moments in Time Dataset | Top-5 (%) | 53.87 | TRN-Multiscale
Activity Recognition | Moments in Time Dataset | Top-1 (%) | 15.71 | TSN-Flow
Activity Recognition | Moments in Time Dataset | Top-5 (%) | 34.65 | TSN-Flow
Activity Recognition | Moments in Time Dataset | Top-1 (%) | 7.6 | SoundNet
Activity Recognition | Moments in Time Dataset | Top-5 (%) | 18 | SoundNet
Action Recognition | Something-Something V1 | Top 1 Accuracy | 50 | ResNet50 I3D (Moments pretrained)
Action Recognition | Something-Something V1 | Top 1 Accuracy | 48.6 | ResNet50 I3D (Kinetics pretrained)
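
The Top-1 and Top-5 columns report standard classification accuracy over the 339 classes: the fraction of test clips whose ground-truth label is the model's single highest-scoring class (Top-1) or appears among its five highest-scoring classes (Top-5). The snippet below is a minimal sketch of how these metrics are typically computed from model logits; it illustrates the metric only and is not the leaderboard's evaluation code.

```python
import torch


def topk_accuracy(logits, targets, ks=(1, 5)):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # logits: [N, num_classes], targets: [N] integer class indices.
    max_k = max(ks)
    _, pred = logits.topk(max_k, dim=1)      # [N, max_k], highest score first
    correct = pred.eq(targets.unsqueeze(1))  # [N, max_k] boolean matches
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}


# Toy example: 4 clips, 339 classes.
logits = torch.randn(4, 339)
targets = torch.randint(0, 339, (4,))
print(topk_accuracy(logits, targets))  # e.g. {1: 0.0, 5: 0.25}
```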

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)