Papers With Code 2

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Resource Efficient 3D Convolutional Neural Networks

Okan Köpüklü, Neslihan Kose, Ahmet Gunduz, Gerhard Rigoll

Published: 2019-04-04
Tasks: Transfer Learning · Action Recognition · Action Recognition In Videos

Abstract

Recently, convolutional neural networks with 3D kernels (3D CNNs) have become very popular in the computer vision community as a result of their superior ability to extract spatio-temporal features from video frames compared to 2D CNNs. Although there have been great advances recently in building resource efficient 2D CNN architectures that respect memory and power budgets, there are hardly any similar resource efficient architectures for 3D CNNs. In this paper, we convert various well-known resource efficient 2D CNNs to 3D CNNs and evaluate their performance on three major benchmarks in terms of classification accuracy at different complexity levels. We experiment on (1) the Kinetics-600 dataset to inspect their capacity to learn, (2) the Jester dataset to inspect their ability to capture motion patterns, and (3) UCF-101 to inspect the applicability of transfer learning. We evaluate the run-time performance of each model on a single Titan XP GPU and a Jetson TX2 embedded system. The results of this study show that these models can be utilized in different types of real-world applications, since they provide real-time performance with considerable accuracy and modest memory usage. Our analysis across complexity levels shows that resource efficient 3D CNNs should not be designed too shallow or too narrow in order to save complexity. The code and pretrained models used in this work are publicly available.
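The conversion the abstract describes replaces 2D convolutions (which slide over height and width) with 3D convolutions that also slide along the temporal axis of a clip. As a minimal sketch of what that extra dimension means for layer geometry (not code from the paper's repository; the function names here are illustrative), the standard convolution output-size formula applies independently per axis:

```python
# Hedged sketch: output-shape arithmetic when a 2D conv layer is
# "inflated" to 3D by adding a temporal kernel dimension.
# conv_out / conv3d_out are illustrative names, not the paper's API.

def conv_out(size, kernel, stride=1, padding=0):
    """Output size along one axis: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def conv2d_out(h, w, k, stride=1, padding=0):
    # A 2D kernel of size k x k slides over height and width only.
    return (conv_out(h, k, stride, padding),
            conv_out(w, k, stride, padding))

def conv3d_out(t, h, w, k, stride=1, padding=0):
    # A 3D kernel of size k x k x k additionally slides along the
    # temporal axis (t = number of frames in the input clip).
    return (conv_out(t, k, stride, padding),
            conv_out(h, k, stride, padding),
            conv_out(w, k, stride, padding))

# A 16-frame 112x112 clip through a 3x3x3 conv, stride 1, padding 1,
# keeps its shape; stride 2 halves every axis, frames included.
print(conv3d_out(16, 112, 112, k=3, padding=1))
print(conv3d_out(16, 112, 112, k=3, stride=2, padding=1))
```

The same formula explains why 3D variants are costlier than their 2D counterparts: the kernel gains a factor of k in parameters, and every output position is computed for each temporal location as well.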

Results

Task                         | Dataset                       | Metric          | Value | Model
Activity Recognition         | Jester (Gesture Recognition)  | Val             | 90.77 | 3D-SqueezeNet
Activity Recognition         | Jester (Gesture Recognition)  | Val             | 86.91 | 3D-ShuffleNetV2 0.25x
Activity Recognition         | Jester (Gesture Recognition)  | Val             | 86.43 | 3D-MobileNetV2 0.2x
Activity Recognition         | UCF101                        | 3-fold Accuracy | 74.94 | 3D-SqueezeNet
Activity Recognition         | UCF101                        | 3-fold Accuracy | 56.52 | 3D-ShuffleNetV2 0.25x
Activity Recognition         | UCF101                        | 3-fold Accuracy | 55.56 | 3D-MobileNetV2 0.2x
Action Recognition           | Jester (Gesture Recognition)  | Val             | 90.77 | 3D-SqueezeNet
Action Recognition           | Jester (Gesture Recognition)  | Val             | 86.91 | 3D-ShuffleNetV2 0.25x
Action Recognition           | Jester (Gesture Recognition)  | Val             | 86.43 | 3D-MobileNetV2 0.2x
Action Recognition           | UCF101                        | 3-fold Accuracy | 74.94 | 3D-SqueezeNet
Action Recognition           | UCF101                        | 3-fold Accuracy | 56.52 | 3D-ShuffleNetV2 0.25x
Action Recognition           | UCF101                        | 3-fold Accuracy | 55.56 | 3D-MobileNetV2 0.2x
Action Recognition In Videos | Jester (Gesture Recognition)  | Val             | 90.77 | 3D-SqueezeNet
Action Recognition In Videos | Jester (Gesture Recognition)  | Val             | 86.91 | 3D-ShuffleNetV2 0.25x
Action Recognition In Videos | Jester (Gesture Recognition)  | Val             | 86.43 | 3D-MobileNetV2 0.2x
Action Recognition In Videos | UCF101                        | 3-fold Accuracy | 74.94 | 3D-SqueezeNet
Action Recognition In Videos | UCF101                        | 3-fold Accuracy | 56.52 | 3D-ShuffleNetV2 0.25x
Action Recognition In Videos | UCF101                        | 3-fold Accuracy | 55.56 | 3D-MobileNetV2 0.2x

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
Calibrated and Robust Foundation Models for Vision-Language and Medical Image Tasks Under Distribution Shift (2025-07-12)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Contrastive and Transfer Learning for Effective Audio Fingerprinting through a Real-World Evaluation Protocol (2025-07-08)