Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DeBERTa

Natural Language Processing · Introduced 2020 · 90 papers
Source Paper

Description

DeBERTa is a Transformer-based neural language model that improves on BERT and RoBERTa with two techniques: a disentangled attention mechanism and an enhanced mask decoder. In the disentangled attention mechanism, each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. The enhanced mask decoder replaces the output softmax layer to predict the masked tokens during model pre-training. In addition, a new virtual adversarial training method is used during fine-tuning to improve the model's generalization on downstream tasks.
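To make the disentangled attention idea concrete, here is a minimal NumPy sketch of the attention-score computation: the score between positions i and j is the sum of a content-to-content term, a content-to-position term, and a position-to-content term, indexed by the clipped relative distance. This is a simplified single-head, unbatched illustration; the function and parameter names are invented for this sketch and do not mirror the official implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_attention(H, P_rel, Wq, Wk, Wq_r, Wk_r, Wv):
    """Single-head disentangled attention sketch.

    H:     (n, d) content vectors for n tokens
    P_rel: (2k+1, d) relative-position embeddings for distances -k..k
    Wq, Wk, Wv:   content projections; Wq_r, Wk_r: position projections
    Returns (output, attention_weights).
    """
    n, d = H.shape
    k = (P_rel.shape[0] - 1) // 2
    Qc, Kc, V = H @ Wq, H @ Wk, H @ Wv          # content projections
    Qr, Kr = P_rel @ Wq_r, P_rel @ Wk_r         # relative-position projections

    # delta(i, j): relative distance i - j, clipped to [-k, k], shifted to [0, 2k]
    idx = np.clip(np.arange(n)[:, None] - np.arange(n)[None, :], -k, k) + k

    c2c = Qc @ Kc.T                                       # content -> content
    c2p = np.take_along_axis(Qc @ Kr.T, idx, axis=1)      # content -> position
    p2c = np.take_along_axis(Kc @ Qr.T, idx, axis=1).T    # position -> content

    # the three terms are summed and scaled by sqrt(3d) as in the paper
    A = softmax((c2c + c2p + p2c) / np.sqrt(3 * d))
    return A @ V, A
```

With random weights, the function returns an (n, d) output whose attention rows each sum to 1; in the real model each term would additionally be split across heads and batched.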

Papers Using This Method

- AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles (2025-07-15)
- PlantBert: An Open Source Language Model for Plant Science (2025-06-10)
- WeightLoRA: Keep Only Necessary Adapters (2025-06-03)
- Decom-Renorm-Merge: Model Merging on the Right Space Improves Multitasking (2025-05-29)
- Transformer-Based Named Entity Recognition for Automated Server Provisioning (2025-04-01)
- Sarang at DEFACTIFY 4.0: Detecting AI-Generated Text Using Noised Data and an Ensemble of DeBERTa Models (2025-02-24)
- Code-Mixed Telugu-English Hate Speech Detection (2025-02-15)
- Zero-Shot Belief: A Hard Problem for LLMs (2025-02-12)
- Feature Alignment-Based Knowledge Distillation for Efficient Compression of Large Language Models (2024-12-27)
- SEKE: Specialised Experts for Keyword Extraction (2024-12-18)
- Lightweight Safety Classification Using Pruned Language Models (2024-12-18)
- RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation (2024-11-06)
- Bonafide at LegalLens 2024 Shared Task: Using Lightweight DeBERTa Based Encoder For Legal Violation Detection and Resolution (2024-10-30)
- Evaluating Transformer Models for Suicide Risk Detection on Social Media (2024-10-10)
- Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective (2024-10-06)
- Multimodal Coherent Explanation Generation of Robot Failures (2024-10-01)
- Improving Academic Skills Assessment with NLP and Ensemble Learning (2024-09-23)
- Instruct-DeBERTa: A Hybrid Approach for Aspect-based Sentiment Analysis on Textual Reviews (2024-08-23)
- Scientific QA System with Verifiable Answers (2024-07-16)
- Turn-Level Empathy Prediction Using Psychological Indicators (2024-07-11)