Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


QCS: Feature Refining from Quadruplet Cross Similarity for Facial Expression Recognition

Chengpeng Wang, Li Chen, Lili Wang, Zhaofan Li, Xuebin Lv

2024-11-04 · Facial Expression Recognition · Facial Expression Recognition (FER)

Paper · PDF · Code (official)

Abstract

Facial expression recognition faces challenges where labeled significant features in datasets are mixed with unlabeled redundant ones. In this paper, we introduce Cross Similarity Attention (CSA) to mine richer intrinsic information from image pairs, overcoming a limitation that arises when the Scaled Dot-Product Attention of ViT is applied directly to compute similarity between two different images. Based on CSA, we simultaneously minimize intra-class differences and maximize inter-class differences at the fine-grained feature level through interactions among multiple branches. Contrastive residual distillation is used to transfer the information learned in the cross module back to the base network. We design a four-branch centrally symmetric network, named Quadruplet Cross Similarity (QCS), which alleviates gradient conflicts arising from the cross module and achieves balanced, stable training. It adaptively extracts discriminative features while isolating redundant ones. The cross-attention modules exist only during training; at inference a single base branch is retained, so inference time does not increase. Extensive experiments show that our proposed method achieves state-of-the-art performance on several FER datasets.
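To make the starting point concrete, the sketch below shows plain scaled dot-product cross-attention between token features from two images (queries from image A, keys/values from image B), the baseline whose limitations CSA is designed to overcome. This is an illustrative numpy sketch, not the paper's CSA formulation; all names and shapes here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_a, feat_b, d_k):
    # Queries come from image A, keys/values from image B:
    # each token of A attends over all tokens of B.
    scores = feat_a @ feat_b.T / np.sqrt(d_k)   # (Na, Nb) cross-image similarity map
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ feat_b                     # (Na, d) features of A refined by B

# Toy example: 4 tokens from image A, 6 tokens from image B, dim 8 (all hypothetical).
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((6, 8))
out = cross_attention(a, b, d_k=8)
print(out.shape)  # (4, 8)
```

The abstract notes that applying this dot-product similarity directly across two different images is limited; CSA replaces it with a cross-similarity formulation at the fine-grained feature level, trained across four branches and distilled back into the single base branch used at inference.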

Results

| Task | Dataset | Metric | Value | Model |
| Facial Expression Recognition (FER) | FER+ | Accuracy | 91.85 | QCS |
| Facial Expression Recognition (FER) | RAF-DB | Overall Accuracy | 93.02 | QCS |
| Facial Expression Recognition (FER) | AffectNet | Accuracy (7 emotion) | 67.94 | QCS |
| Facial Expression Recognition (FER) | AffectNet | Accuracy (8 emotion) | 64.4 | QCS |

Related Papers

Multimodal Prompt Alignment for Facial Expression Recognition (2025-06-26)
Enhancing Ambiguous Dynamic Facial Expression Recognition with Soft Label-based Data Augmentation (2025-06-25)
Using Vision Language Models to Detect Students' Academic Emotion through Facial Expressions (2025-06-12)
EfficientFER: EfficientNetv2 Based Deep Learning Approach for Facial Expression Recognition (2025-06-02)
TKFNet: Learning Texture Key Factor Driven Feature for Facial Expression Recognition (2025-05-15)
Unsupervised Multiview Contrastive Language-Image Joint Learning with Pseudo-Labeled Prompts Via Vision-Language Model for 3D/4D Facial Expression Recognition (2025-05-14)
Achieving 3D Attention via Triplet Squeeze and Excitation Block (2025-05-09)
Some Optimizers are More Equal: Understanding the Role of Optimizers in Group Fairness (2025-04-21)