Chengpeng Wang, Li Chen, Lili Wang, Zhaofan Li, Xuebin Lv
Facial expression recognition is challenging because the significant, labeled features in datasets are mixed with unlabeled redundant ones. In this paper, we introduce Cross Similarity Attention (CSA) to mine richer intrinsic information from image pairs, overcoming a limitation of the Scaled Dot-Product Attention of ViT when it is applied directly to compute the similarity between two different images. Based on CSA, we simultaneously minimize intra-class differences and maximize inter-class differences at the fine-grained feature level through interactions among multiple branches. Contrastive residual distillation transfers the information learned in the cross module back to the base network. We design a four-branch, centrally symmetric network, named Quadruplet Cross Similarity (QCS), which alleviates gradient conflicts arising from the cross module and achieves balanced, stable training. It adaptively extracts discriminative features while isolating redundant ones. The cross-attention modules exist only during training; a single base branch is retained during inference, so inference time does not increase. Extensive experiments show that our proposed method achieves state-of-the-art performance on several FER datasets.
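The abstract describes attending across an image pair rather than within a single image. As a rough illustration only, the sketch below shows generic cross-attention between the token features of two images, where queries come from one image and keys/values from the other; this is the standard scaled dot-product form whose limitation CSA is designed to overcome, not the paper's CSA formulation itself, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b):
    """Generic cross-attention between two images' token features.

    feat_a, feat_b: (batch, tokens, dim) features from a shared backbone.
    Each token of image A attends over all tokens of image B, so the
    output mixes information across the pair.
    """
    scale = feat_a.shape[-1] ** -0.5
    attn = softmax(feat_a @ feat_b.swapaxes(-2, -1) * scale)  # (B, Na, Nb)
    return attn @ feat_b  # (B, Na, dim)

# Example: 7x7 patch grids (49 tokens) with 64-dim features.
a = np.random.randn(2, 49, 64)
b = np.random.randn(2, 49, 64)
out = cross_attention(a, b)
print(out.shape)  # (2, 49, 64)
```

In the paper's setting such cross modules are used only during training; at inference the pair-wise branches are dropped and a single base branch remains.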
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Facial Expression Recognition (FER) | FER+ | Accuracy | 91.85 | QCS |
| Facial Expression Recognition (FER) | RAF-DB | Overall Accuracy | 93.02 | QCS |
| Facial Expression Recognition (FER) | AffectNet | Accuracy (7 emotion) | 67.94 | QCS |
| Facial Expression Recognition (FER) | AffectNet | Accuracy (8 emotion) | 64.4 | QCS |