
DVANet: Disentangling View and Action Features for Multi-View Action Recognition

Nyle Siddiqui, Praveen Tirupattur, Mubarak Shah

2023-12-10 · Action Recognition · Action Recognition In Videos
Paper · PDF · Code (official)

Abstract

In this work, we present a novel approach to multi-view action recognition in which we guide learned action representations to be separated from view-relevant information in a video. Classifying action instances captured from multiple viewpoints is more difficult because background, occlusion, and the visibility of the captured action vary across camera angles. To tackle the various problems introduced in multi-view action recognition, we propose a novel configuration of learnable transformer decoder queries, in conjunction with two supervised contrastive losses, to enforce the learning of action features that are robust to shifts in viewpoint. Our disentangled feature learning occurs in two stages: the transformer decoder uses separate queries to separately learn action and view information, which are then further disentangled using our two contrastive losses. We show that our model and method of training significantly outperform all other uni-modal models on four multi-view action recognition datasets: NTU RGB+D, NTU RGB+D 120, PKU-MMD, and N-UCLA. Compared to previous RGB works, we see maximum improvements of 1.5%, 4.8%, 2.2%, and 4.8% on these datasets, respectively.
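The official code is linked above. Purely as an illustration of the two-stage idea the abstract describes (separate decoder queries for action and view, each branch trained with a supervised contrastive loss), here is a minimal PyTorch sketch. All module names, shapes, query counts, and the exact loss pairing are assumptions made for this sketch, not the authors' implementation; the contrastive loss follows the standard supervised contrastive formulation (Khosla et al., 2020).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledDecoder(nn.Module):
    """Hypothetical sketch: two learnable query sets pull action- and
    view-specific information out of shared backbone features."""

    def __init__(self, dim=512, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        # One query per factor here; the paper's query configuration may differ.
        self.action_query = nn.Parameter(torch.randn(1, 1, dim))
        self.view_query = nn.Parameter(torch.randn(1, 1, dim))

    def forward(self, video_tokens):                  # (B, T, dim) features
        b = video_tokens.size(0)
        queries = torch.cat([self.action_query, self.view_query], dim=1)
        out = self.decoder(queries.expand(b, -1, -1), video_tokens)
        return out[:, 0], out[:, 1]                   # action_feat, view_feat

def sup_con_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss: each anchor is pulled toward
    same-label samples in the batch and pushed from the rest."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ feats.t() / temperature          # (B, B) similarities
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    logits = logits.masked_fill(self_mask, -1e9)      # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                            # anchors with positives
    loss = -(log_prob * pos_mask).sum(1)[valid] / pos_counts[valid]
    return loss.mean()

# Hypothetical training step: action labels supervise the action branch,
# camera/view ids supervise the view branch.
model = DisentangledDecoder()
video_tokens = torch.randn(8, 64, 512)   # stand-in for backbone clip features
action_labels = torch.randint(0, 4, (8,))
view_ids = torch.randint(0, 3, (8,))
action_feat, view_feat = model(video_tokens)
loss = sup_con_loss(action_feat, action_labels) + sup_con_loss(view_feat, view_ids)
loss.backward()
```

In this reading, routing the two branches through separate queries gives each factor its own slot in the decoder output, and the two contrastive losses then sharpen the split: same-action clips cluster regardless of camera, while same-camera clips cluster regardless of action.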

Results

Task | Dataset | Metric | Value (%) | Model
Activity Recognition | N-UCLA | Accuracy (Cross-Subject) | 94.4 | DVANet
Activity Recognition | N-UCLA | Accuracy (Cross-View) | 96.5 | DVANet
Activity Recognition | NTU RGB+D | Accuracy (Cross-Subject) | 93.4 | DVANet (RGB only)
Activity Recognition | NTU RGB+D | Accuracy (Cross-View) | 98.1 | DVANet (RGB only)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 90.4 | DVANet (RGB only)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 91.6 | DVANet (RGB only)
Activity Recognition | PKU-MMD | Cross-Subject | 95.8 | DVANet (RGB only)
Activity Recognition | PKU-MMD | Cross-View | 95.2 | DVANet (RGB only)
Action Recognition | N-UCLA | Accuracy (Cross-Subject) | 94.4 | DVANet
Action Recognition | N-UCLA | Accuracy (Cross-View) | 96.5 | DVANet
Action Recognition | NTU RGB+D | Accuracy (Cross-Subject) | 93.4 | DVANet (RGB only)
Action Recognition | NTU RGB+D | Accuracy (Cross-View) | 98.1 | DVANet (RGB only)
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 90.4 | DVANet (RGB only)
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 91.6 | DVANet (RGB only)
Action Recognition | PKU-MMD | Cross-Subject | 95.8 | DVANet (RGB only)
Action Recognition | PKU-MMD | Cross-View | 95.2 | DVANet (RGB only)
Action Recognition In Videos | PKU-MMD | Cross-Subject | 95.8 | DVANet (RGB only)
Action Recognition In Videos | PKU-MMD | Cross-View | 95.2 | DVANet (RGB only)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)