Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-scale Dynamic and Hierarchical Relationship Modeling for Facial Action Units Recognition

Zihan Wang, Siyang Song, Cheng Luo, Songhe Deng, Weicheng Xie, Linlin Shen

2024-04-09 · CVPR 2024 · Facial Action Unit Detection
Paper · PDF · Code (official)

Abstract

Human facial action units (AUs) are mutually related in a hierarchical manner: not only are they associated with each other in both the spatial and temporal domains, but AUs located in the same or nearby facial regions also show stronger relationships than those in different facial regions. While no existing approach thoroughly models such hierarchical inter-dependencies among AUs, this paper proposes to comprehensively model multi-scale AU-related dynamics and hierarchical spatio-temporal relationships among AUs for AU occurrence recognition. Specifically, we first propose a novel multi-scale temporal differencing network with an adaptive weighting block to explicitly capture facial dynamics across frames at different spatial scales, which specifically accounts for the heterogeneity in the range and magnitude of different AUs' activations. Then, a two-stage strategy is introduced to hierarchically model the relationships among AUs based on their spatial distribution (i.e., local and cross-region AU relationship modelling). Experimental results on BP4D and DISFA show that our approach sets a new state of the art in AU occurrence recognition. Our code is publicly available at https://github.com/CVI-SZU/MDHR.
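The abstract's core mechanism, multi-scale temporal differencing with adaptive weighting, can be sketched roughly as follows. This is a minimal, hypothetical illustration in plain Python, not the authors' implementation: the function and parameter names are invented, and the real MDHR model operates on spatial feature maps inside a deep network rather than flat per-frame vectors. The idea it shows is the one the abstract names: take frame differences at several temporal strides and fuse them with learned (here, softmax-normalised) per-scale weights.

```python
import math

def multiscale_temporal_difference(frames, scales=(1, 2, 4), weights=None):
    """Hypothetical sketch (not the authors' code) of multi-scale temporal
    differencing: for each scale s, take differences between feature vectors
    s frames apart, then fuse the scales with softmax-normalised weights,
    standing in for the paper's adaptive weighting block.

    `frames` is a list of T feature vectors (lists of floats)."""
    T, D = len(frames), len(frames[0])
    if weights is None:
        weights = [0.0] * len(scales)        # uniform fusion after softmax
    exps = [math.exp(w) for w in weights]
    norm = [e / sum(exps) for e in exps]     # softmax over scales
    fused = [[0.0] * D for _ in range(T)]
    for w, s in zip(norm, scales):
        for t in range(s, T):                # dynamics at temporal stride s
            for d in range(D):
                fused[t][d] += w * (frames[t][d] - frames[t - s][d])
    return fused

# Toy usage: 8 frames of 4-dim features that grow linearly over time,
# so the fused output mixes the per-scale differences 4, 8, and 16.
feats = [[4 * t + d for d in range(4)] for t in range(8)]
dyn = multiscale_temporal_difference(feats)
```

In the paper this fusion is learned end-to-end per AU, which is what lets it handle AUs whose activations differ in temporal range and magnitude; the fixed softmax weights here are only a stand-in for that adaptive block.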

Results

Task | Dataset | Metric | Value | Model
Facial Action Unit Detection | DISFA | Average F1 | 66.2 | MDHR
Facial Action Unit Detection | BP4D | Average F1 | 66.6 | MDHR

Related Papers

- FG 2025 TrustFAA: The First Workshop on Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA) (2025-06-05)
- AU-TTT: Vision Test-Time Training Model for Facial Action Unit Detection (2025-03-30)
- Decoupled Doubly Contrastive Learning for Cross Domain Facial Action Unit Detection (2025-03-12)
- Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample (2024-10-02)
- Towards Unified Facial Action Unit Recognition Framework by Large Language Models (2024-09-13)
- Towards End-to-End Explainable Facial Action Unit Recognition via Vision-Language Joint Learning (2024-08-01)
- Norface: Improving Facial Expression Analysis by Identity Normalization (2024-07-22)
- Representation Learning and Identity Adversarial Training for Facial Behavior Understanding (2024-07-15)