Yuedong Chen, Jianfeng Wang, Shikai Chen, Zhongchao Shi, Jianfei Cai
Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most existing deep learning based FER methods do not consider domain knowledge well, and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch that generates a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Facial Expression Recognition (FER) | AffectNet | Accuracy (7 emotion) | 61.52 | Facial Motion Prior Network |
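The mask guidance described in the abstract, averaging the differences between neutral faces and their corresponding expressive faces to obtain a pseudo ground-truth motion mask, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and normalization choice are assumptions, and it assumes aligned grayscale face arrays.

```python
import numpy as np

def expression_prior_mask(neutral_faces, expressive_faces):
    """Build a pseudo ground-truth facial-motion mask for one expression class.

    neutral_faces, expressive_faces: float arrays of shape (N, H, W),
    aligned so that pixel (i, j) corresponds across all images.
    Returns an (H, W) mask in [0, 1] highlighting muscle-moving regions.
    """
    # Per-pair absolute intensity change between expressive and neutral face.
    diff = np.abs(expressive_faces - neutral_faces)
    # Average over all neutral/expressive pairs of this expression class.
    mask = diff.mean(axis=0)
    # Min-max normalize to [0, 1] so the mask can serve as a training target
    # (normalization scheme is an assumption for illustration).
    rng = mask.max() - mask.min()
    return (mask - mask.min()) / rng if rng > 0 else np.zeros_like(mask)
```

One such mask per expression class would then supervise the mask-generating branch, encouraging the network to attend to regions where facial muscles typically move for that expression.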