
EPAM-Net: An Efficient Pose-driven Attention-guided Multimodal Network for Video Action Recognition

Ahmed Abdelkawy, Asem Ali, Aly Farag

2024-08-10 · Action Classification · Action Recognition · Action Recognition In Videos · Temporal Action Localization
Paper · PDF · Code (official)

Abstract

Existing multimodal-based human action recognition approaches are either computationally expensive, which limits their applicability in real-time scenarios, or fail to exploit the spatio-temporal information of multiple data modalities. In this work, we present an efficient pose-driven attention-guided multimodal network (EPAM-Net) for action recognition in videos. Specifically, we adapted X3D networks for both RGB and pose streams to capture spatio-temporal features from RGB videos and their skeleton sequences. The skeleton features are then utilized to help the visual network stream focus on key frames and their salient spatial regions using a spatio-temporal attention block. Finally, the scores of the two streams of the proposed network are fused for final classification. The experimental results show that our method achieves competitive performance on the NTU RGB+D 60 and NTU RGB+D 120 benchmark datasets. Moreover, our model provides a 6.2-9.9x reduction in FLOPs (floating-point operations, measured in multiply-adds) and a 9-9.6x reduction in the number of network parameters. The code will be available at https://github.com/ahmed-nady/Multimodal-Action-Recognition.
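The abstract describes a two-stream design: an RGB stream and a pose stream, with pose features producing a spatio-temporal attention map that re-weights the RGB features, followed by late score fusion. Below is a minimal PyTorch sketch of that idea only. The module names (PoseGuidedAttention, TwoStreamActionModel), the plain Conv3d stand-ins for the X3D backbones, and the tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the pose-driven attention-guided two-stream idea, assuming PyTorch.
# Backbones are placeholder Conv3d layers; the paper uses X3D networks for both streams.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseGuidedAttention(nn.Module):
    """Pose features produce a spatio-temporal attention map that re-weights RGB features."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, rgb_feat, pose_feat):
        # rgb_feat, pose_feat: (N, C, T, H, W); align pose features to the RGB feature grid.
        pose_feat = F.interpolate(pose_feat, size=rgb_feat.shape[2:])
        attn = torch.sigmoid(self.proj(pose_feat))   # (N, 1, T, H, W) attention map
        return rgb_feat * attn + rgb_feat            # residual re-weighting of the RGB stream


class TwoStreamActionModel(nn.Module):
    def __init__(self, num_classes=60, channels=64):
        super().__init__()
        # Stand-ins for the X3D backbones of the RGB and pose streams.
        self.rgb_backbone = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.pose_backbone = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.attention = PoseGuidedAttention(channels)
        self.rgb_head = nn.Linear(channels, num_classes)
        self.pose_head = nn.Linear(channels, num_classes)

    def forward(self, rgb_clip, pose_heatmaps):
        rgb_feat = self.rgb_backbone(rgb_clip)              # (N, C, T, H, W)
        pose_feat = self.pose_backbone(pose_heatmaps)
        rgb_feat = self.attention(rgb_feat, pose_feat)      # pose-driven spatio-temporal attention
        rgb_logits = self.rgb_head(rgb_feat.mean(dim=(2, 3, 4)))
        pose_logits = self.pose_head(pose_feat.mean(dim=(2, 3, 4)))
        return rgb_logits + pose_logits                     # late fusion of the two streams' scores


model = TwoStreamActionModel()
rgb = torch.randn(1, 3, 8, 56, 56)    # RGB clip: N x C x T x H x W
pose = torch.randn(1, 3, 8, 56, 56)   # skeleton heatmap volume, same layout (shape assumed)
print(model(rgb, pose).shape)          # torch.Size([1, 60])
```

In this sketch the pose stream both contributes its own classification scores and gates the RGB features, which mirrors the abstract's description of attention guidance plus score fusion; the real model's attention block and fusion weighting may differ.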

Results

Task | Dataset | Metric | Value | Model
Video | Toyota Smarthome dataset | CS | 71.7 | EPAM-Net
Video | Toyota Smarthome dataset | CV2 | 67.8 | EPAM-Net
Activity Recognition | NTU RGB+D | Accuracy (CS) | 96.1 | EPAM-Net
Activity Recognition | NTU RGB+D | Accuracy (CV) | 99 | EPAM-Net
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 92.4 | EPAM-Net
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 94.3 | EPAM-Net
Activity Recognition | PKU-MMD | X-Sub | 96.2 | EPAM-Net
Activity Recognition | PKU-MMD | X-View | 98.4 | EPAM-Net
Action Recognition | NTU RGB+D | Accuracy (CS) | 96.1 | EPAM-Net
Action Recognition | NTU RGB+D | Accuracy (CV) | 99 | EPAM-Net
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 92.4 | EPAM-Net
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 94.3 | EPAM-Net
Action Recognition | PKU-MMD | X-Sub | 96.2 | EPAM-Net
Action Recognition | PKU-MMD | X-View | 98.4 | EPAM-Net
Action Recognition In Videos | PKU-MMD | X-Sub | 96.2 | EPAM-Net
Action Recognition In Videos | PKU-MMD | X-View | 98.4 | EPAM-Net

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)