Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning

Azmine Toushik Wasi, Karlo Šerbetar, Raima Islam, Taki Hasan Rafi, Dong-Kyu Chae

2023-05-02 · Facial Emotion Recognition · Facial Expression Recognition · Facial Expression Recognition (FER)
Paper · PDF · Code (official)

Abstract

In this paper, we introduce ARBEx, a novel attentive feature extraction framework driven by a Vision Transformer with reliability balancing to cope with poor class distributions, bias, and uncertainty in the facial expression learning (FEL) task. We reinforce several data pre-processing and refinement methods, along with a window-based cross-attention ViT, to make the most of the data. We also employ learnable anchor points in the embedding space, together with label distributions and a multi-head self-attention mechanism, to optimize performance against weak predictions through reliability balancing, a strategy that leverages anchor points, attention scores, and confidence values to enhance the resilience of label predictions. To ensure correct label classification and improve the model's discriminative power, we introduce anchor loss, which encourages large margins between anchor points. Additionally, the trainable multi-head self-attention mechanism plays an integral role in identifying accurate labels. This approach provides critical elements for improving the reliability of predictions and has a substantial positive effect on final prediction performance. Our adaptive model can be integrated with any deep neural network to address challenges in various recognition tasks. According to extensive experiments conducted in a variety of contexts, our strategy outperforms current state-of-the-art methods.
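The anchor-loss idea in the abstract, learnable per-class anchor points whose pairwise margins are pushed apart while embeddings are pulled toward their class anchor, can be sketched as follows. This is a minimal illustrative interpretation, not the paper's exact formulation: the class `AnchorLoss`, the MSE pull term, and the hinge-style push term with a `margin` hyperparameter are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorLoss(nn.Module):
    """Illustrative sketch of a margin-based anchor loss (not the official ARBEx code)."""

    def __init__(self, num_classes: int, embed_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable anchor point per class in the embedding space.
        self.anchors = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.margin = margin

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Pull each sample embedding toward the anchor of its labelled class.
        pull = F.mse_loss(embeddings, self.anchors[labels])
        # Push distinct anchors at least `margin` apart (hinge on pairwise distances).
        dists = torch.cdist(self.anchors, self.anchors)
        off_diag = ~torch.eye(self.anchors.shape[0], dtype=torch.bool)
        push = F.relu(self.margin - dists[off_diag]).mean()
        return pull + push

# Usage with 7 basic expression classes and 16-d embeddings (sizes are arbitrary):
loss_fn = AnchorLoss(num_classes=7, embed_dim=16)
emb = torch.randn(4, 16)
labels = torch.tensor([0, 1, 2, 3])
loss = loss_fn(emb, labels)
```

Because the anchors are an `nn.Parameter`, they are updated by the same optimizer as the backbone, matching the abstract's description of anchor points that are learned rather than fixed.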

Results

Task                          | Dataset   | Metric           | Value | Model
Emotion Recognition           | JAFFE     | Accuracy         | 96.67 | ARBEx
Facial Expression Recognition | RAF-DB    | Overall Accuracy | 92.47 | ARBEx
Facial Expression Recognition | Aff-Wild2 | Accuracy         | 72.48 | ARBEx
Facial Expression Recognition | FER+      | Accuracy         | 93.09 | ARBEx

Related Papers

Multimodal Prompt Alignment for Facial Expression Recognition (2025-06-26)
Enhancing Ambiguous Dynamic Facial Expression Recognition with Soft Label-based Data Augmentation (2025-06-25)
Reading Smiles: Proxy Bias in Foundation Models for Facial Emotion Recognition (2025-06-23)
Using Vision Language Models to Detect Students' Academic Emotion through Facial Expressions (2025-06-12)
EfficientFER: EfficientNetv2 Based Deep Learning Approach for Facial Expression Recognition (2025-06-02)
TKFNet: Learning Texture Key Factor Driven Feature for Facial Expression Recognition (2025-05-15)
Unsupervised Multiview Contrastive Language-Image Joint Learning with Pseudo-Labeled Prompts Via Vision-Language Model for 3D/4D Facial Expression Recognition (2025-05-14)
Achieving 3D Attention via Triplet Squeeze and Excitation Block (2025-05-09)