Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li

2021-11-04 · Adversarial Robustness · Natural Language Understanding · Adversarial Attack

Abstract

Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, recent studies reveal that the robustness of these models can be challenged by carefully crafted textual adversarial examples. While several individual datasets have been proposed to evaluate model robustness, a principled and comprehensive benchmark is still missing. In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations. Our findings are summarized as follows. (i) Most existing adversarial attack algorithms are prone to generating invalid or ambiguous adversarial examples, with around 90% of them either changing the original semantic meanings or misleading human annotators as well. Therefore, we perform a careful filtering process to curate a high-quality benchmark. (ii) All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind the benign accuracy. We hope our work will motivate the development of new adversarial attacks that are more stealthy and semantic-preserving, as well as new robust language models against sophisticated adversarial attacks. AdvGLUE is available at https://adversarialglue.github.io.
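The adversarial examples the abstract describes are small, meaning-preserving edits (e.g., word-level synonym substitutions) that nonetheless flip a model's prediction. The toy sketch below illustrates that failure mode with a deliberately brittle keyword classifier; the classifier, synonym table, and function names are hypothetical illustrations, not the paper's attack implementations.

```python
# Toy sketch of a word-level substitution attack, the kind of
# perturbation AdvGLUE curates. Everything here is a hypothetical
# illustration, not code from the paper.

SYNONYMS = {"good": "decent", "great": "fine", "terrible": "awful"}

def toy_sentiment(text: str) -> str:
    """A deliberately brittle keyword classifier."""
    positive = {"good", "great"}
    words = set(text.lower().split())
    return "positive" if words & positive else "negative"

def substitute(text: str) -> str:
    """Replace each word with a synonym when one is available."""
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())

original = "the movie was great"
adversarial = substitute(original)   # "the movie was fine"

# The meaning is preserved, but the brittle model flips its label:
print(toy_sentiment(original))      # positive
print(toy_sentiment(adversarial))   # negative
```

A real attack pipeline would also verify that the substitution preserves the original label for humans; per the abstract, roughly 90% of automatically generated candidates fail that check and are filtered out of AdvGLUE.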

Results

Task | Dataset | Metric | Value | Model
Adversarial Robustness | AdvGLUE | Accuracy | 0.6086 | DeBERTa (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.5922 | ALBERT (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.5682 | T5 (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.5371 | SMART_RoBERTa (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.5048 | FreeLB (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.5021 | RoBERTa (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.4603 | InfoBERT (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.4169 | ELECTRA (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.3369 | BERT (single model)
Adversarial Robustness | AdvGLUE | Accuracy | 0.3029 | SMART_BERT (single model)
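The spread across the reported models is itself informative: even the strongest entry stays near 0.61 accuracy while the weakest falls near 0.30. A quick sketch over the table's values (the numbers are copied from the table above; the variable names are illustrative):

```python
# AdvGLUE adversarial accuracies, copied from the results table above.
results = {
    "DeBERTa": 0.6086,
    "ALBERT": 0.5922,
    "T5": 0.5682,
    "SMART_RoBERTa": 0.5371,
    "FreeLB": 0.5048,
    "RoBERTa": 0.5021,
    "InfoBERT": 0.4603,
    "ELECTRA": 0.4169,
    "BERT": 0.3369,
    "SMART_BERT": 0.3029,
}

best = max(results, key=results.get)
worst = min(results, key=results.get)
spread = results[best] - results[worst]

print(f"best:   {best} ({results[best]:.2%})")   # best:   DeBERTa (60.86%)
print(f"worst:  {worst} ({results[worst]:.2%})")  # worst:  SMART_BERT (30.29%)
print(f"spread: {spread:.2%}")                    # spread: 30.57%
```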

Related Papers

Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)
Vision Language Action Models in Robotic Manipulation: A Systematic Review (2025-07-14)
3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving (2025-07-14)
VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models (2025-07-11)
Identifying the Smallest Adversarial Load Perturbations that Render DC-OPF Infeasible (2025-07-10)
ScoreAdv: Score-based Targeted Generation of Natural Adversarial Examples via Diffusion Models (2025-07-08)
Tail-aware Adversarial Attacks: A Distributional Approach to Efficient LLM Jailbreaking (2025-07-06)
Evaluating the Evaluators: Trust in Adversarial Robustness Tests (2025-07-04)