
Generative Action Description Prompts for Skeleton-based Action Recognition

Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, Lei Zhang

2022-08-10 · ICCV 2023
Tags: Representation Learning · Skeleton Based Action Recognition · Action Recognition · Language Modelling
Links: Paper · PDF · Code (official)

Abstract

Skeleton-based action recognition has recently received considerable attention. Current approaches are typically formulated as one-hot classification tasks and do not fully exploit the semantic relations between actions. For example, "make victory sign" and "thumb up" are two hand-gesture actions whose major difference lies in the movement of the hands. This information cannot be recovered from the categorical one-hot encoding of action classes, but it can be unveiled from the action description. Therefore, utilizing action descriptions during training could potentially benefit representation learning. In this work, we propose a Generative Action-description Prompts (GAP) approach for skeleton-based action recognition. More specifically, we employ a pre-trained large-scale language model as a knowledge engine to automatically generate text descriptions of the body-part movements of actions, and we propose a multi-modal training scheme that uses the text encoder to generate feature vectors for different body parts and supervise the skeleton encoder for action representation learning. Experiments show that our proposed GAP method achieves noticeable improvements over various baseline models without extra computational cost at inference. GAP achieves new state-of-the-art results on popular skeleton-based action recognition benchmarks, including NTU RGB+D, NTU RGB+D 120 and NW-UCLA. The source code is available at https://github.com/MartinXM/GAP.
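The training scheme the abstract describes lends itself to a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: a prompt template stands in for the language-model-generated per-body-part descriptions, a text encoder turns those descriptions into part-level feature vectors, and the skeleton encoder's part features are aligned to them with a contrastive loss alongside the usual classification loss. The part list, prompt template, loss form, and loss weight `lam` are all assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of GAP-style part-level text supervision.
# The part names, prompt template, and loss weighting below are
# assumptions for illustration, not the paper's exact choices.

PARTS = ["head", "hands", "arms", "hips", "legs"]

def part_prompts(action_name: str) -> list[str]:
    # In GAP these descriptions come from a pre-trained language model
    # queried offline; here we fake them with a fixed template.
    return [f"When performing '{action_name}', the {p} ..." for p in PARTS]

def gap_loss(skeleton_feats, text_feats, logits, labels, lam=0.5, tau=0.07):
    """skeleton_feats, text_feats: (B, P, D) per-part embeddings;
    logits: (B, C) classification scores; labels: (B,) class ids."""
    cls_loss = F.cross_entropy(logits, labels)
    s = F.normalize(skeleton_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    # Per-part symmetric contrastive alignment between skeleton and
    # text features: matching pairs along the diagonal are positives.
    align_loss = 0.0
    for p in range(s.shape[1]):
        sim = s[:, p] @ t[:, p].T / tau  # (B, B) similarity matrix
        targets = torch.arange(s.shape[0], device=sim.device)
        align_loss = align_loss + 0.5 * (
            F.cross_entropy(sim, targets) + F.cross_entropy(sim.T, targets)
        )
    return cls_loss + lam * align_loss / s.shape[1]
```

Because the text branch only provides training targets, it can be dropped at test time, which is consistent with the abstract's claim of no extra computational cost at inference.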

Results

Task                               Dataset        Metric                    Value  Model
Skeleton Based Action Recognition  NTU RGB+D      Accuracy (Cross-Subject)  92.9   LST
Skeleton Based Action Recognition  NTU RGB+D      Accuracy (Cross-View)     97.0   LST
Skeleton Based Action Recognition  NTU RGB+D      Ensembled Modalities      4      LST
Skeleton Based Action Recognition  NTU RGB+D 120  Accuracy (Cross-Subject)  89.9   LST
Skeleton Based Action Recognition  NTU RGB+D 120  Accuracy (Cross-Setup)    91.1   LST
Skeleton Based Action Recognition  NTU RGB+D 120  Ensembled Modalities      4      LST
Skeleton Based Action Recognition  N-UCLA         Accuracy                  97.2   LST
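"Ensembled Modalities: 4" indicates that the reported LST numbers fuse four skeleton input streams; in this literature those are typically the joint, bone, joint-motion, and bone-motion streams, combined by score-level late fusion. Below is a minimal sketch of that fusion step under those assumptions; the stream names and equal weights are illustrative, not taken from the paper.

```python
import torch

def ensemble_scores(stream_logits, weights=None):
    """Late fusion of per-stream classification scores.

    stream_logits: dict mapping a stream name ('joint', 'bone',
    'joint_motion', 'bone_motion') to an (N, C) logits tensor.
    Equal weights are an assumption; in practice fusion weights are
    often tuned per dataset."""
    if weights is None:
        weights = {name: 1.0 for name in stream_logits}
    fused = sum(w * stream_logits[name].softmax(dim=-1)
                for name, w in weights.items())
    return fused.argmax(dim=-1)  # predicted class index per sample
```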

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)