Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Something-Something V2

Images · Videos · Custom · Introduced: 2017-01-01

The 20BN-SOMETHING-SOMETHING V2 dataset is a large collection of labeled video clips showing humans performing pre-defined basic actions with everyday objects. It was created by a large number of crowd workers and is intended to help machine learning models develop a fine-grained understanding of basic actions in the physical world. It contains 220,847 videos: 168,913 in the training set, 24,777 in the validation set, and 27,157 in the test set, spanning 174 action labels.
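As a quick sanity check on the figures above, the three split sizes can be verified to sum to the published total, and the split proportions computed from them. This is a minimal sketch using only the counts stated on this page (the dictionary keys and variable names are illustrative, not read from the dataset's own metadata files):

```python
# Split sizes for Something-Something V2, as stated in the description above.
splits = {"train": 168_913, "validation": 24_777, "test": 27_157}

total = sum(splits.values())
assert total == 220_847  # matches the published total clip count

# Fraction of clips in each split, rounded to three decimals.
fractions = {name: round(n / total, 3) for name, n in splits.items()}
print(fractions)  # roughly a 76% / 11% / 12% train/val/test split
```

The roughly 76/11/12 split is typical for large-scale video classification benchmarks, leaving most clips for training while keeping validation and test sets large enough for stable metric estimates.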


Benchmarks

Abnormal Event Detection In Video / Avg. ROC-AUC
Abnormal Event Detection In Video / Architecture
Action Recognition / Top-1 Accuracy
Action Recognition / Top-5 Accuracy
Action Recognition / Parameters
Action Recognition / GFLOPs
Action Recognition In Videos / Top-1 Accuracy
Action Recognition In Videos / Top-5 Accuracy
Activity Recognition / Top-1 Accuracy
Activity Recognition / Top-5 Accuracy
Activity Recognition / Parameters
Activity Recognition / GFLOPs
Anomaly Detection / Avg. ROC-AUC
Anomaly Detection / Architecture
Semi-supervised Anomaly Detection / Avg. ROC-AUC
Semi-supervised Anomaly Detection / Architecture
Text-to-Video Generation / FVD
Video / Acc@1
Video / Acc@5
Video / FVD
Video / Top-5 Accuracy
Video Classification / Top-5 Accuracy
Video Prediction / FVD

Statistics

Papers: 290
Benchmarks: 23

Links

Homepage

Tasks

Abnormal Event Detection In Video
Action Classification
Action Recognition
Action Recognition In Videos
Activity Recognition
Anomaly Detection
Early Action Prediction
General Action Video Anomaly Detection
Semi-supervised Anomaly Detection
Text-to-Video Generation
Video
Video Classification
Video Prediction