Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


On the Utility of 3D Hand Poses for Action Recognition

Md Salman Shamil, Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao

2024-03-14 · 3D Action Recognition · Action Recognition

Paper · PDF · Code (official)

Abstract

3D hand pose is an underexplored modality for action recognition. Poses are compact yet informative and can greatly benefit applications with limited compute budgets. However, poses alone offer an incomplete understanding of actions, as they cannot fully capture objects and environments with which humans interact. We propose HandFormer, a novel multimodal transformer, to efficiently model hand-object interactions. HandFormer combines 3D hand poses at a high temporal resolution for fine-grained motion modeling with sparsely sampled RGB frames for encoding scene semantics. Observing the unique characteristics of hand poses, we temporally factorize hand modeling and represent each joint by its short-term trajectories. This factorized pose representation combined with sparse RGB samples is remarkably efficient and highly accurate. Unimodal HandFormer with only hand poses outperforms existing skeleton-based methods with 5x fewer FLOPs. With RGB, we achieve new state-of-the-art performance on Assembly101 and H2O, with significant improvements in egocentric action recognition.
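The factorized pose representation described above can be illustrated with a minimal sketch: a pose sequence is split into short temporal windows, and within each window every joint becomes a token carrying its short-term trajectory, while only a few RGB frames are sampled for scene context. This is a simplified illustration, not the paper's implementation; the function names, window length, and sampling scheme are assumptions.

```python
import numpy as np

def factorize_pose_sequence(poses, clip_len=8):
    """Illustrative sketch: represent each joint by its short-term trajectory.

    poses: array of shape (T, J, 3) -- T frames of J 3D hand joints.
    Returns an array of shape (T // clip_len, J, clip_len * 3), i.e. one
    trajectory token per joint per short temporal window.
    """
    T, J, C = poses.shape
    n = T // clip_len
    windows = poses[: n * clip_len].reshape(n, clip_len, J, C)
    # Group each joint's positions within a window into one trajectory token.
    return windows.transpose(0, 2, 1, 3).reshape(n, J, clip_len * C)

def sample_sparse_rgb(num_frames, num_samples=4):
    """Pick a few uniformly spaced RGB frame indices for scene semantics.

    Uniform spacing is an assumption for illustration only.
    """
    return np.linspace(0, num_frames - 1, num_samples).astype(int)

# Example: 32 frames of 21 hand joints -> 4 windows of 21 trajectory tokens.
pose_tokens = factorize_pose_sequence(np.random.rand(32, 21, 3), clip_len=8)
rgb_indices = sample_sparse_rgb(32, num_samples=4)
```

The high-rate pose stream captures fine-grained motion cheaply, while the sparse RGB stream supplies object and scene information the poses cannot encode; a transformer would then attend jointly over both token sets.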

Results

Task                         | Dataset                    | Metric        | Value | Model
Video                        | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Video                        | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Video                        | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
Temporal Action Localization | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Temporal Action Localization | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Temporal Action Localization | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
Zero-Shot Learning           | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Zero-Shot Learning           | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Zero-Shot Learning           | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
Activity Recognition         | H2O (2 Hands and Objects)  | Actions Top-1 | 93.39 | HandFormer-B/21x8
Activity Recognition         | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Activity Recognition         | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Activity Recognition         | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
Action Localization          | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Action Localization          | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Action Localization          | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
3D Action Recognition        | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
3D Action Recognition        | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
3D Action Recognition        | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21
Action Recognition           | H2O (2 Hands and Objects)  | Actions Top-1 | 93.39 | HandFormer-B/21x8
Action Recognition           | Assembly101                | Actions Top-1 | 41.06 | HandFormer-B/21
Action Recognition           | Assembly101                | Object Top-1  | 51.17 | HandFormer-B/21
Action Recognition           | Assembly101                | Verbs Top-1   | 69.23 | HandFormer-B/21

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)