Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Omni-sourced Webly-supervised Learning for Video Recognition

Haodong Duan, Yue Zhao, Yuanjun Xiong, Wentao Liu, Dahua Lin

2020-03-29 · ECCV 2020 · Action Classification · Video Recognition · Action Recognition

Paper · PDF · Code (official)

Abstract

We introduce OmniSource, a novel framework for leveraging web data to train video recognition models. OmniSource overcomes the barriers between data formats, such as images, short videos, and long untrimmed videos, for webly-supervised learning. First, data samples in multiple formats, curated by task-specific data collection and automatically filtered by a teacher model, are transformed into a unified form. Then a joint-training strategy is proposed to deal with the domain gaps between multiple data sources and formats in webly-supervised learning. Several good practices, including data balancing, resampling, and cross-dataset mixup, are adopted in joint training. Experiments show that by utilizing data from multiple sources and formats, OmniSource is more data-efficient in training. With only 3.5M images and 800K minutes of videos crawled from the internet without human labeling (less than 2% of the data used by prior works), our models learned with OmniSource improve the Top-1 accuracy of 2D- and 3D-ConvNet baseline models by 3.0% and 3.9%, respectively, on the Kinetics-400 benchmark. With OmniSource, we establish new records with different pre-training strategies for video recognition. Our best models achieve 80.4%, 80.5%, and 83.6% Top-1 accuracy on the Kinetics-400 benchmark for training from scratch, ImageNet pre-training, and IG-65M pre-training, respectively.
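
The teacher-filtering step described in the abstract can be pictured with a short PyTorch sketch: a teacher model trained on the target dataset scores each crawled sample against the label it was collected for, and low-confidence samples are discarded. The loader interface, the confidence criterion, and the threshold value here are illustrative assumptions; the abstract only states that a teacher model filters the collected data.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_web_samples(teacher, loader, conf_thresh=0.5, device="cuda"):
    """Keep a crawled sample only if the teacher assigns at least
    `conf_thresh` probability to the label it was crawled for.
    Hypothetical interface: loader yields (inputs, query_labels, ids)."""
    teacher = teacher.eval().to(device)
    kept = []
    for inputs, query_labels, sample_ids in loader:
        # Teacher confidence for the label each sample was collected under.
        probs = F.softmax(teacher(inputs.to(device)), dim=1)
        labels = query_labels.to(device)
        conf = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
        keep = (conf >= conf_thresh).tolist()
        kept += [sid for sid, ok in zip(sample_ids, keep) if ok]
    return kept
```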
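
Of the joint-training practices named in the abstract, cross-dataset mixup is concrete enough to sketch. Below is a minimal version, assuming web images are replicated along the temporal axis into pseudo-clips before being blended with real clips; the Beta(alpha, alpha) mixing and the replication trick are standard mixup conventions rather than details confirmed by this page.

```python
import torch
import torch.nn.functional as F

def cross_dataset_mixup_loss(model, video_clips, video_labels,
                             web_images, web_labels, alpha=0.2):
    """Blend real clips (N, C, T, H, W) with pseudo-clips made by repeating
    web images (N, C, H, W) along the time axis, then combine the two
    cross-entropy terms as in standard mixup. `alpha` is an assumption."""
    t = video_clips.size(2)
    # Repeat each image t times to form a static pseudo-clip.
    pseudo_clips = web_images.unsqueeze(2).expand(-1, -1, t, -1, -1)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * video_clips + (1.0 - lam) * pseudo_clips
    logits = model(mixed)
    return (lam * F.cross_entropy(logits, video_labels)
            + (1.0 - lam) * F.cross_entropy(logits, web_labels))
```

The two loss terms weighted by lam mirror the usual mixup objective; in joint training this loss would sit alongside plain per-source losses, with data balancing and resampling handled by the samplers.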

Results

Task                 | Dataset      | Metric                       | Value | Model
Video                | Kinetics-400 | Acc@1                        | 83.6  | OmniSource irCSN-152 (IG-Kinetics-65M pretrain)
Video                | Kinetics-400 | Acc@1                        | 80.5  | OmniSource SlowOnly R101 8x8 (ImageNet pretrain)
Video                | Kinetics-400 | Acc@5                        | 94.4  | OmniSource SlowOnly R101 8x8 (ImageNet pretrain)
Video                | Kinetics-400 | Acc@1                        | 80.4  | OmniSource SlowOnly R101 8x8 (Scratch)
Video                | Kinetics-400 | Acc@5                        | 94.4  | OmniSource SlowOnly R101 8x8 (Scratch)
Activity Recognition | HMDB-51      | Average accuracy of 3 splits | 83.8  | OmniSource (SlowOnly-8x8-R101-RGB + I3D Flow)
Activity Recognition | UCF101       | 3-fold Accuracy              | 98.6  | OmniSource (SlowOnly-8x8-R101-RGB + I3D Flow)
Action Recognition   | HMDB-51      | Average accuracy of 3 splits | 83.8  | OmniSource (SlowOnly-8x8-R101-RGB + I3D Flow)
Action Recognition   | UCF101       | 3-fold Accuracy              | 98.6  | OmniSource (SlowOnly-8x8-R101-RGB + I3D Flow)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)