Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


VPN: Learning Video-Pose Embedding for Activities of Daily Living

Srijan Das, Saurav Sharma, Rui Dai, Francois Bremond, Monique Thonnat

Published 2020-07-06 · ECCV 2020
Tasks: Action Classification · Skeleton Based Action Recognition · Human-Object Interaction Detection · Action Recognition
Links: Paper · PDF · Code (official)

Abstract

In this paper, we focus on the spatio-temporal aspect of recognizing Activities of Daily Living (ADL). ADL have two specific properties: (i) subtle spatio-temporal patterns and (ii) similar visual patterns varying with time. Therefore, ADL may look very similar and often necessitate looking at their fine-grained details to distinguish them. Because recent spatio-temporal 3D ConvNets are too rigid to capture the subtle visual patterns across an action, we propose a novel Video-Pose Network: VPN. The two key components of this VPN are a spatial embedding and an attention network. The spatial embedding projects the 3D poses and RGB cues into a common semantic space. This enables the action recognition framework to learn better spatio-temporal features exploiting both modalities. In order to discriminate similar actions, the attention network provides two functionalities: (i) an end-to-end learnable pose backbone exploiting the topology of the human body, and (ii) a coupler to provide joint spatio-temporal attention weights across a video. Experiments show that VPN outperforms the state-of-the-art results for action classification on a large-scale human activity dataset, NTU-RGB+D 120, its subset NTU-RGB+D 60, a real-world challenging human activity dataset, Toyota Smarthome, and a small-scale human-object interaction dataset, Northwestern-UCLA.
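The two components described above — a shared embedding space for pose and RGB cues, and a pose-conditioned attention coupler over RGB features — can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the mean-pooled pose context, and the scaled dot-product scoring are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions (not taken from the paper)
T, J, D_pose = 8, 25, 3      # frames, skeleton joints, 3D coordinates
P, D_rgb = 7 * 7, 64         # spatial positions and channels of RGB features
D_emb = 32                   # common semantic space

# Random stand-ins for backbone outputs
pose = rng.standard_normal((T, J, D_pose))   # 3D skeleton sequence
rgb = rng.standard_normal((T, P, D_rgb))     # per-frame RGB feature map

# Spatial embedding: project both modalities into a shared space
W_pose = rng.standard_normal((D_pose, D_emb)) * 0.1
W_rgb = rng.standard_normal((D_rgb, D_emb)) * 0.1
pose_emb = pose @ W_pose                     # (T, J, D_emb)
rgb_emb = rgb @ W_rgb                        # (T, P, D_emb)

# Attention coupler: pose-conditioned spatial weights on RGB features
pose_ctx = pose_emb.mean(axis=1)             # (T, D_emb) per-frame pose summary
scores = np.einsum('td,tpd->tp', pose_ctx, rgb_emb) / np.sqrt(D_emb)
attn = softmax(scores, axis=-1)              # (T, P), rows sum to 1

# Pose-attended RGB features, pooled into a clip-level descriptor
attended = np.einsum('tp,tpc->tc', attn, rgb)  # (T, D_rgb)
video_feat = attended.mean(axis=0)             # (D_rgb,)
print(video_feat.shape)                        # → (64,)
```

In the paper the pose branch is itself a learnable backbone that exploits the body's joint topology, and the coupler produces joint spatio-temporal (not only spatial) attention; the sketch collapses both into fixed projections to show only the data flow.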

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Video | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Video | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Video | Toyota Smarthome dataset | CS | 60.8 | VPN (RGB + Pose) |
| Video | Toyota Smarthome dataset | CV1 | 43.8 | VPN (RGB + Pose) |
| Video | Toyota Smarthome dataset | CV2 | 53.5 | VPN (RGB + Pose) |
| Temporal Action Localization | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Temporal Action Localization | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Temporal Action Localization | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Zero-Shot Learning | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Zero-Shot Learning | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Zero-Shot Learning | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Activity Recognition | NTU RGB+D | Accuracy (CS) | 95.5 | VPN (RGB + Pose) |
| Activity Recognition | NTU RGB+D | Accuracy (CV) | 98 | VPN (RGB + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 86.3 | VPN (RGB + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 87.8 | VPN (RGB + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Activity Recognition | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Action Localization | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Action Localization | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Action Localization | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Action Detection | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Action Detection | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Action Detection | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| 3D Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| 3D Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| 3D Action Recognition | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 95.5 | VPN (RGB + Pose) |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 98 | VPN (RGB + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 86.3 | VPN (RGB + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 87.8 | VPN (RGB + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 87.8 | VPN |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 86.3 | VPN |
| Action Recognition | N-UCLA | Accuracy | 93.5 | VPN (RGB + Pose) |

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- RoHOI: Robustness Benchmark for Human-Object Interaction Detection (2025-07-12)
- Bilateral Collaboration with Large Vision-Language Models for Open Vocabulary Human-Object Interaction Detection (2025-07-09)
- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions (2025-06-29)
- EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)