Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dice Loss for Data-imbalanced NLP Tasks

Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, Jiwei Li

2019-11-07 · ACL 2020

Tasks: Reading Comprehension · Question Answering · Paraphrase Identification · Part-of-Speech Tagging · Named Entity Recognition · Chinese Named Entity Recognition · Machine Reading Comprehension

Paper · PDF · Code (official)

Abstract

Many NLP tasks, such as tagging and machine reading comprehension, face a severe data-imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms training. The most commonly used cross-entropy (CE) criterion is actually an accuracy-oriented objective and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time the F1 score is concerned more with positive examples. In this paper, we propose to use dice loss in place of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sørensen-Dice coefficient or the Tversky index, which attaches similar importance to false positives and false negatives, and is therefore more immune to the data-imbalance issue. To further alleviate the dominating influence of easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights that de-emphasize easy-negative examples. Theoretical analysis shows that this strategy narrows the gap between the F1 score used in evaluation and the dice loss used in training. With the proposed training objective, we observe significant performance boosts on a wide range of data-imbalanced NLP tasks. Notably, we achieve SOTA results on CTB5, CTB6 and UD1.4 for part-of-speech tagging; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for named entity recognition; along with competitive results on machine reading comprehension and paraphrase identification.
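The abstract's key idea, a dice-style loss whose per-example weight (1 - p)·p shrinks for confidently classified easy examples, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' released implementation; the function name, the binary (N,)-shaped interface, and the choice of smoothing constant `gamma` are assumptions for illustration.

```python
import numpy as np

def self_adjusting_dice_loss(probs, targets, gamma=1.0):
    """Sketch of a self-adjusting dice (DSC) loss for binary decisions.

    probs   : predicted probability of the positive class, shape (N,)
    targets : gold labels in {0, 1}, shape (N,)
    gamma   : smoothing constant added to numerator and denominator
              (a hyperparameter; the value here is an assumption)
    """
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    # (1 - p) * p is the dynamically adjusted weight: it is small when the
    # model is already confident (p near 0 or 1), so easy-negative examples
    # contribute little to the objective and stop dominating training.
    weighted = (1.0 - probs) * probs
    dsc = (2.0 * weighted * targets + gamma) / (weighted + targets + gamma)
    return float(np.mean(1.0 - dsc))

# An easy negative (p close to 0, y = 0) yields a near-zero loss, while a
# misclassified positive (p close to 0, y = 1) still incurs a large loss.
loss_easy_negative = self_adjusting_dice_loss([0.01], [0])
loss_hard_positive = self_adjusting_dice_loss([0.10], [1])
```

Compare this with cross entropy, where every easy negative still contributes -log(1 - p) to the objective; here the (1 - p)·p factor pushes its contribution toward zero, which is the mechanism the abstract describes for narrowing the gap between training loss and evaluation F1.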

Results

Task                              | Dataset                | Metric | Value | Model
Question Answering                | SQuAD 1.1 (dev)        | EM     | 89.79 | XLNet+DSC
Question Answering                | SQuAD 1.1 (dev)        | F1     | 95.77 | XLNet+DSC
Question Answering                | SQuAD 2.0 (dev)        | EM     | 87.65 | XLNet+DSC
Question Answering                | SQuAD 2.0 (dev)        | F1     | 89.51 | XLNet+DSC
Named Entity Recognition (NER)    | OntoNotes v5 (English) | F1     | 92.07 | BERT-MRC+DSC
Named Entity Recognition (NER)    | CoNLL 2003 (English)   | F1     | 93.33 | BERT-MRC+DSC
Named Entity Recognition (NER)    | MSRA                   | F1     | 96.72 | BERT-MRC+DSC
Named Entity Recognition (NER)    | OntoNotes 4            | F1     | 84.47 | BERT-MRC+DSC

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Warehouse Spatial Question Answering with LLM Agent (2025-07-14)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)