Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Cross-Modal Learning with 3D Deformable Attention for Action Recognition

Sangwon Kim, Dasom Ahn, Byoung Chul Ko

2022-12-12 · ICCV 2023 · Action Recognition
Paper · PDF

Abstract

An important challenge in vision-based action recognition is embedding spatiotemporal features from two or more heterogeneous modalities into a single feature. In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme. The 3D deformable transformer consists of three attention modules: 3D deformable attention, local joint stride attention, and temporal stride attention. The two cross-modal tokens (RGB and pose) are input into the 3D deformable attention module to create a cross-attention token that reflects their spatiotemporal correlation. Local joint stride attention is applied to spatially combine the attention and pose tokens. Temporal stride attention temporally reduces the number of input tokens to the attention module, supporting temporal representation learning without using all tokens simultaneously. The deformable transformer iterates L times and combines the last cross-modal token for classification. The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and PennAction datasets, and achieved results better than or comparable to pre-trained state-of-the-art methods, even without a pre-training process. In addition, by visualizing important joints and their correlations during action recognition through spatial joint and temporal stride attention, we show the potential for explainable action recognition.
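The temporal stride attention described above lends itself to a short sketch. The following is a minimal, hypothetical PyTorch illustration of the general idea only (full queries attending to temporally strided keys/values to reduce the token count); the class name, tensor layout, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalStrideAttention(nn.Module):
    """Illustrative sketch (not the paper's code): multi-head attention in
    which the key/value tokens are subsampled along the temporal axis with a
    fixed stride, so each query attends to a reduced set of time steps."""

    def __init__(self, dim: int, num_heads: int = 8, stride: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.stride = stride

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, J, dim) -- T time steps, J joint tokens per step
        b, t, j, d = x.shape
        q = x.reshape(b, t * j, d)                  # every token issues a query
        kv = x[:, ::self.stride].reshape(b, -1, d)  # temporally strided keys/values
        out, _ = self.attn(q, kv, kv)
        return out.reshape(b, t, j, d)

# Toy usage: 16 frames, 25 joints (NTU-style skeleton), 64-dim tokens
tokens = torch.randn(2, 16, 25, 64)
print(TemporalStrideAttention(dim=64)(tokens).shape)  # torch.Size([2, 16, 25, 64])
```

Subsampling only the keys/values keeps the output token count unchanged while cutting attention cost roughly by the stride factor, which matches the abstract's goal of temporal learning without using all tokens at once.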

Results

Task                 | Dataset       | Metric                   | Value | Model
Activity Recognition | NTU RGB+D     | Accuracy (CS)            | 94.3  | 3DA (RGB + Pose)
Activity Recognition | NTU RGB+D     | Accuracy (CV)            | 97.9  | 3DA (RGB + Pose)
Activity Recognition | Penn Action   | Accuracy                 | 99.7  | 3DA (RGB + Pose)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup)   | 91.4  | 3DA (RGB + Pose)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 90.5  | 3DA (RGB + Pose)
Action Recognition   | NTU RGB+D     | Accuracy (CS)            | 94.3  | 3DA (RGB + Pose)
Action Recognition   | NTU RGB+D     | Accuracy (CV)            | 97.9  | 3DA (RGB + Pose)
Action Recognition   | Penn Action   | Accuracy                 | 99.7  | 3DA (RGB + Pose)
Action Recognition   | NTU RGB+D 120 | Accuracy (Cross-Setup)   | 91.4  | 3DA (RGB + Pose)
Action Recognition   | NTU RGB+D 120 | Accuracy (Cross-Subject) | 90.5  | 3DA (RGB + Pose)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)