Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Fine-Grained Side Information Guided Dual-Prompts for Zero-Shot Skeleton Action Recognition

Yang Chen, Jingcai Guo, Tian He, Ling Wang

2024-04-11 · Attribute · Zero-Shot Skeleton Action Recognition · Zero-Shot Action Recognition · Action Recognition

Paper · PDF

Abstract

Skeleton-based zero-shot action recognition aims to recognize unknown human actions from priors learned on known skeleton-based actions, together with a semantic descriptor space shared by both known and unknown categories. However, previous works establish bridges between the known skeleton representation space and the semantic description space only at the coarse-grained level, ignoring the fine-grained alignment of the two spaces and thus performing suboptimally at distinguishing high-similarity action categories. To address these challenges, we propose a novel method via side information and dual-prompt learning for skeleton-based zero-shot action recognition (STAR) at the fine-grained level. Specifically, 1) we decompose the skeleton into several parts based on its topological structure and introduce side information in the form of multi-part descriptions of human body movements, aligning the skeleton and semantic spaces at the fine-grained level; and 2) we design visual-attribute and semantic-part prompts to improve intra-class compactness within the skeleton space and inter-class separability within the semantic space, respectively, in order to distinguish high-similarity actions. Extensive experiments show that our method achieves state-of-the-art performance in ZSL and GZSL settings on the NTU RGB+D, NTU RGB+D 120, and PKU-MMD datasets.
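The fine-grained alignment idea from the abstract can be illustrated with a minimal sketch: score each unseen class by comparing part-level skeleton embeddings against part-level semantic descriptors. All shapes, the number of parts, and the random embeddings below are assumptions for illustration; the paper's actual encoders and prompt modules are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the skeleton is decomposed into P body parts
# (e.g. head, hands, torso, legs) and each part gets a D-dim embedding.
P, D = 5, 64
num_unseen = 10  # e.g. an NTU RGB+D 120 split with 10 unseen classes

# Part-level skeleton embeddings for one sample (stand-in for a skeleton encoder).
skeleton_parts = rng.normal(size=(P, D))

# Part-level semantic descriptors per unseen class (stand-in for a text
# encoder applied to multi-part descriptions of body movements).
class_parts = rng.normal(size=(num_unseen, P, D))

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_scores(skel_parts, cls_parts):
    """Score each unseen class by the mean cosine similarity between
    corresponding skeleton-part and semantic-part embeddings."""
    s = l2_normalize(skel_parts)   # (P, D)
    c = l2_normalize(cls_parts)    # (C, P, D)
    # Per-part cosine similarity, averaged over parts -> one score per class.
    return np.einsum('pd,cpd->cp', s, c).mean(axis=1)  # (C,)

scores = zero_shot_scores(skeleton_parts, class_parts)
pred = int(np.argmax(scores))
```

Averaging per-part similarities, rather than comparing a single pooled embedding, is what makes the matching fine-grained: two globally similar actions can still be separated by a single discriminative part.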

Results

The same six results are reported under each of the tagged tasks (Video, Temporal Action Localization, Zero-Shot Learning, Activity Recognition, Action Localization, 3D Action Recognition, Action Recognition):

Dataset          Metric                         Value  Model
NTU RGB+D 120    Accuracy (10 unseen classes)   63.3   STAR
NTU RGB+D 120    Accuracy (24 unseen classes)   44.3   STAR
PKU-MMD          Random Split Accuracy          70.6   STAR
NTU RGB+D        Accuracy (12 unseen classes)   45.1   STAR
NTU RGB+D        Accuracy (5 unseen classes)    81.4   STAR
NTU RGB+D        Random Split Accuracy          77.5   STAR

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Non-Adaptive Adversarial Face Generation (2025-07-16)
Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models (2025-07-13)
Model Parallelism With Subnetwork Data Parallelism (2025-07-11)
Bradley-Terry and Multi-Objective Reward Modeling Are Complementary (2025-07-10)