Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment

Jinglin Xu, Sibo Yin, Guohao Zhao, Zishuo Wang, Yuxin Peng

2024-05-11 · CVPR 2024 · Action Quality Assessment · Action Understanding

Paper · PDF · Code (official)

Abstract

Existing action quality assessment (AQA) methods mainly learn deep representations at the video level for scoring diverse actions. Lacking a fine-grained understanding of the actions in a video, they suffer severely from low credibility and interpretability and are therefore insufficient for stringent applications such as Olympic diving events. We argue that a fine-grained understanding of actions requires the model to perceive and parse actions in both time and space, which is also the key to the credibility and interpretability of AQA techniques. Based on this insight, we propose a new fine-grained spatio-temporal action parser named FineParser. It learns human-centric foreground action representations by focusing on target action regions within each frame and exploiting their fine-grained alignments in time and space, minimizing the impact of invalid backgrounds during assessment. In addition, we construct fine-grained annotations of human-centric foreground action masks for the FineDiving dataset, called FineDiving-HM. With refined annotations of diverse target action procedures, FineDiving-HM can promote the development of real-world AQA systems. Through extensive experiments, we demonstrate the effectiveness of FineParser, which outperforms state-of-the-art methods while supporting more fine-grained action understanding tasks. Data and code are available at https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024.

Results

Task                      | Dataset    | Metric               | Value  | Model
--------------------------|------------|----------------------|--------|-----------
Action Quality Assessment | FineDiving | RL2 (×100)           | 0.2602 | FineParser
Action Quality Assessment | FineDiving | Spearman Correlation | 0.9435 | FineParser
Action Quality Assessment | MTL-AQA    | RL2 (×100)           | 0.241  | FineParser
Action Quality Assessment | MTL-AQA    | Spearman Correlation | 95.85  | FineParser
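The two metrics in the table are standard in the AQA literature: Spearman rank correlation between predicted and judge-assigned scores, and relative ℓ2 error (RL2) normalized by the dataset's score range. Below is a minimal NumPy sketch of both, assuming the common definitions (no ties in scores, RL2 normalized by the score range and averaged over samples); the exact normalization used by a given benchmark may differ.

```python
import numpy as np

def spearman_corr(pred, true):
    # Rank-transform both score lists, then compute the Pearson
    # correlation of the ranks. Ties are rare for continuous AQA
    # scores, so simple ordinal ranking is used here.
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    rp = np.argsort(np.argsort(pred)).astype(float)
    rt = np.argsort(np.argsort(true)).astype(float)
    rp -= rp.mean()
    rt -= rt.mean()
    return float((rp @ rt) / np.sqrt((rp @ rp) * (rt @ rt)))

def relative_l2(pred, true, score_min, score_max):
    # RL2 (often reported multiplied by 100): squared error normalized
    # by the dataset's score range, averaged over samples.
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(((pred - true) / (score_max - score_min)) ** 2))
```

For example, a perfectly monotone prediction yields a Spearman correlation of 1.0 regardless of scale, while RL2 additionally penalizes the magnitude of score errors.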

Related Papers

- LLaVA-Pose: Enhancing Human Pose and Action Understanding via Keypoint-Integrated Instruction Tuning (2025-06-26)
- PHI: Bridging Domain Shift in Long-Term Action Quality Assessment via Progressive Hierarchical Instruction (2025-05-26)
- The Role of Video Generation in Enhancing Data-Limited Action Understanding (2025-05-26)
- PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition (2025-04-17)
- F$^3$Set: Towards Analyzing Fast, Frequent, and Fine-grained Events from Videos (2025-04-11)
- RoboAct-CLIP: Video-Driven Pre-training of Atomic Action Understanding for Robotics (2025-04-02)
- FineCausal: A Causal-Based Framework for Interpretable Fine-Grained Action Quality Assessment (2025-03-31)
- Can DeepSeek Reason Like a Surgeon? An Empirical Evaluation for Vision-Language Understanding in Robotic-Assisted Surgery (2025-03-29)