Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

VPN++: Rethinking Video-Pose embeddings for understanding Activities of Daily Living

Srijan Das, Rui Dai, Di Yang, Francois Bremond

2021-05-17 · Action Classification · Skeleton Based Action Recognition · Action Recognition
Paper · PDF · Code (official)

Abstract

Many attempts have been made towards combining RGB and 3D poses for the recognition of Activities of Daily Living (ADL). ADL may look very similar and often necessitate modeling fine-grained details to distinguish them. Because recent 3D ConvNets are too rigid to capture the subtle visual patterns across an action, this research direction is dominated by methods combining RGB and 3D poses. But the cost of computing 3D poses from an RGB stream is high in the absence of appropriate sensors, which limits the use of such approaches in real-world applications requiring low latency. How, then, can we best take advantage of 3D poses for recognizing ADL? To this end, we propose an extension of a pose-driven attention mechanism, the Video-Pose Network (VPN), exploring two distinct directions: one transfers pose knowledge into RGB through feature-level distillation, and the other mimics pose-driven attention through attention-level distillation. Finally, these two approaches are integrated into a single model, which we call VPN++. We show that VPN++ is not only effective but also provides a high speedup and high resilience to noisy poses. VPN++, with or without 3D poses, outperforms representative baselines on four public datasets. Code is available at https://github.com/srijandas07/vpnplusplus.
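The abstract's two distillation directions can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature dimensions, attention-map size, loss choices (MSE), and combination weight are all assumptions made for the example.

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """Feature-level distillation: pull the RGB student's embedding
    toward the pose teacher's embedding (MSE, an illustrative choice)."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def attention_distillation_loss(student_attn, teacher_attn, eps=1e-8):
    """Attention-level distillation: match L1-normalized spatial
    attention maps so the RGB student mimics pose-driven attention."""
    s = student_attn / (np.sum(student_attn) + eps)
    t = teacher_attn / (np.sum(teacher_attn) + eps)
    return float(np.mean((s - t) ** 2))

rng = np.random.default_rng(0)
rgb_feat  = rng.standard_normal(256)   # RGB-stream embedding (assumed dim)
pose_feat = rng.standard_normal(256)   # pose-teacher embedding
rgb_attn  = rng.random((7, 7))         # student attention map (assumed 7x7)
pose_attn = rng.random((7, 7))         # pose-driven teacher attention map

# VPN++ integrates both directions; a weighted sum is one plausible
# way to combine them (the weight 0.5 is arbitrary here).
total = feature_distillation_loss(rgb_feat, pose_feat) \
      + 0.5 * attention_distillation_loss(rgb_attn, pose_attn)
print(total)
```

At inference, only the distilled RGB student is needed, which matches the abstract's point that VPN++ works with or without 3D poses and avoids the latency of pose estimation.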

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Temporal Action Localization | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Zero-Shot Learning | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 90.7 | VPN++ (RGB + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 92.5 | VPN++ (RGB + Pose) |
| Activity Recognition | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Action Localization | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Action Detection | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| 3D Action Recognition | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 90.7 | VPN++ (RGB + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 92.5 | VPN++ (RGB + Pose) |
| Action Recognition | N-UCLA | Accuracy | 93.5 | VPN++ (RGB + Pose) |

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)