Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition

Fanglei Xue, Qiangchang Wang, Zichang Tan, Zhongsong Ma, Guodong Guo

Published: 2022-12-11 · Task: Facial Expression Recognition (FER)
Paper · PDF · Code (official)

Abstract

Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform inferiorly compared to Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to train to convergence from scratch, since they lack inductive bias and tend to focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping, but at excessive computational cost. In contrast, we present two attentive pooling (AP) modules that pool out noisy features directly: Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They guide the model to emphasize the most discriminative features while reducing the impact of less relevant ones. APP selects the most informative patches from CNN features, and ATP discards unimportant tokens in the ViT. Being simple to implement and free of learnable parameters, APP and ATP reduce the computational cost while boosting performance by pursuing only the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive pooling modules, and quantitative results on six in-the-wild datasets outperform other state-of-the-art methods.
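The core idea of ATP — score tokens by importance and keep only the top-ranked ones, with no learnable parameters — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses each token's feature norm as a stand-in importance score (the paper derives scores from attention maps), and the function name and `keep_ratio` parameter are assumptions for the example.

```python
import numpy as np

def attentive_token_pooling(tokens, keep_ratio=0.5):
    """Parameter-free token pooling sketch (hypothetical interface).

    Scores each token by its L2 norm (a proxy for the attention-derived
    importance used in the paper) and keeps only the top-k tokens,
    preserving their original order.

    tokens: (N, D) array of ViT token embeddings.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    scores = np.linalg.norm(tokens, axis=1)   # per-token importance proxy
    keep = np.argsort(scores)[::-1][:k]       # indices of the top-k tokens
    return tokens[np.sort(keep)]              # drop the rest, keep order

# Example: pool 8 tokens down to 4 before the classification head.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
pooled = attentive_token_pooling(x, keep_ratio=0.5)
print(pooled.shape)  # (4, 16)
```

Because the selection has no learnable parameters, downstream layers process fewer tokens, which is how the paper can cut computation while focusing the model on the most discriminative features.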

Results

Task                                 Dataset  Metric            Value  Model
Facial Expression Recognition (FER)  RAF-DB   Overall Accuracy  91.98  APViT

Related Papers

Multimodal Prompt Alignment for Facial Expression Recognition (2025-06-26)
Enhancing Ambiguous Dynamic Facial Expression Recognition with Soft Label-based Data Augmentation (2025-06-25)
Using Vision Language Models to Detect Students' Academic Emotion through Facial Expressions (2025-06-12)
EfficientFER: EfficientNetv2 Based Deep Learning Approach for Facial Expression Recognition (2025-06-02)
TKFNet: Learning Texture Key Factor Driven Feature for Facial Expression Recognition (2025-05-15)
Unsupervised Multiview Contrastive Language-Image Joint Learning with Pseudo-Labeled Prompts Via Vision-Language Model for 3D/4D Facial Expression Recognition (2025-05-14)
Achieving 3D Attention via Triplet Squeeze and Excitation Block (2025-05-09)
Some Optimizers are More Equal: Understanding the Role of Optimizers in Group Fairness (2025-04-21)