
Learning Spatio-Temporal Representation with Pseudo-3D Residual Networks

Zhaofan Qiu, Ting Yao, Tao Mei

2017-11-28 · ICCV 2017 · Tasks: Video Classification, Action Recognition
Paper · PDF · Code (official)

Abstract

Convolutional Neural Networks (CNNs) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial to utilize a CNN for learning spatio-temporal video representations. A few studies have shown that performing 3D convolutions is a rewarding approach to capturing both the spatial and temporal dimensions of videos. However, training a very deep 3D CNN from scratch incurs expensive computational cost and memory demand. A natural question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating $3\times3\times3$ convolutions with $1\times3\times3$ convolutional filters on the spatial domain (equivalent to a 2D CNN) plus $3\times1\times1$ convolutions that construct temporal connections across adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks, composing each at a different placement in ResNet, following the philosophy that enhancing structural diversity while going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on the Sports-1M video classification dataset over 3D CNN and frame-based 2D CNN baselines by 5.3% and 1.8%, respectively. We further examine the generalization of the video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performance over several state-of-the-art techniques.
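
The decomposition is straightforward to express in code. Below is a minimal PyTorch sketch (not the authors' official implementation) of the three block variants the paper describes: P3D-A applies the temporal filter after the spatial one, P3D-B runs them in parallel, and P3D-C adds a shortcut from the spatial output around the temporal filter. Channel widths, batch-norm placement, and the cyclic A→B→C interleaving in the usage example are illustrative assumptions.

```python
# Minimal sketch of Pseudo-3D bottleneck blocks; widths and schedule are assumed.
import torch
import torch.nn as nn


def spatial_conv(in_ch, out_ch):
    # 1x3x3 convolution: 2D spatial filtering applied frame by frame.
    return nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1), bias=False)


def temporal_conv(in_ch, out_ch):
    # 3x1x1 convolution: links the same spatial location across adjacent frames.
    return nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0), bias=False)


class P3DBlock(nn.Module):
    """Bottleneck residual block that decomposes a 3x3x3 convolution.

    variant 'A': serial    x -> S -> T
    variant 'B': parallel  S(x) + T(x)
    variant 'C': serial with a spatial shortcut: S(x) + T(S(x))
    """

    def __init__(self, channels, bottleneck, variant="A"):
        super().__init__()
        self.variant = variant
        self.reduce = nn.Conv3d(channels, bottleneck, kernel_size=1, bias=False)
        self.S = spatial_conv(bottleneck, bottleneck)
        self.T = temporal_conv(bottleneck, bottleneck)
        self.expand = nn.Conv3d(bottleneck, channels, kernel_size=1, bias=False)
        self.bn_r = nn.BatchNorm3d(bottleneck)
        self.bn_s = nn.BatchNorm3d(bottleneck)
        self.bn_t = nn.BatchNorm3d(bottleneck)
        self.bn_e = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.relu(self.bn_r(self.reduce(x)))
        if self.variant == "A":      # P3D-A: temporal filter follows spatial filter
            h = self.relu(self.bn_s(self.S(h)))
            h = self.relu(self.bn_t(self.T(h)))
        elif self.variant == "B":    # P3D-B: spatial and temporal paths in parallel
            h = self.relu(self.bn_s(self.S(h)) + self.bn_t(self.T(h)))
        else:                        # P3D-C: serial path plus a spatial shortcut
            s = self.bn_s(self.S(h))
            h = self.relu(s + self.bn_t(self.T(self.relu(s))))
        h = self.bn_e(self.expand(h))
        return self.relu(x + h)      # residual connection


# Usage: interleave the variants A -> B -> C through the network, per the
# paper's structural-diversity idea (channel sizes here are illustrative).
blocks = nn.Sequential(*[P3DBlock(256, 64, variant=v) for v in "ABC" * 2])
clip = torch.randn(1, 256, 16, 28, 28)   # (batch, channels, frames, H, W)
print(blocks(clip).shape)                 # torch.Size([1, 256, 16, 28, 28])
```

Because each pseudo-3D block keeps the bottleneck structure of a 2D ResNet, the $1\times3\times3$ spatial filters can be initialized from pre-trained 2D ResNet weights, which is what makes recycling off-the-shelf 2D networks practical.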

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Activity Recognition | Sports-1M | Clip Hit@1 | 47.9 | P3D |
| Activity Recognition | Sports-1M | Video Hit@1 | 66.4 | P3D |
| Activity Recognition | Sports-1M | Video Hit@5 | 87.4 | P3D |
| Activity Recognition | ActivityNet | mAP | 78.9 | P3D |
| Activity Recognition | UCF101 | 3-fold Accuracy | 88.6 | P3D (ImageNet + Sports-1M) |
| Action Recognition | Sports-1M | Clip Hit@1 | 47.9 | P3D |
| Action Recognition | Sports-1M | Video Hit@1 | 66.4 | P3D |
| Action Recognition | Sports-1M | Video Hit@5 | 87.4 | P3D |
| Action Recognition | ActivityNet | mAP | 78.9 | P3D |
| Action Recognition | UCF101 | 3-fold Accuracy | 88.6 | P3D (ImageNet + Sports-1M) |

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- Streaming 4D Visual Geometry Transformer (2025-07-15)
- AI-Reporter: A Path to a New Genre of Scientific Communication (2025-07-08)
- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- ActAlign: Zero-Shot Fine-Grained Video Classification via Language-Guided Sequence Alignment (2025-06-28)
- EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)