Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset

Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, Daniel E. Ho

2021-04-18 · Text Classification · Question Answering · Self-Supervised Learning · Specificity · Multiple-choice
Paper · PDF · Code (official) · Code

Abstract

While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains to domain pretraining in spite of the fact that legal language is widely seen to be unique. We hypothesize that these existing results stem from the fact that existing legal NLP tasks are too easy and fail to meet conditions for when domain pretraining can help. To address this, we first present CaseHOLD (Case Holdings On Legal Decisions), a new dataset comprising over 53,000 multiple choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second, we assess performance gains on CaseHOLD and existing legal NLP datasets. While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (using a corpus of approximately 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains on CaseHOLD (gain of 7.2% on F1, representing a 12% improvement over BERT) and consistent performance gains across two other legal tasks. Third, we show that domain pretraining may be warranted when the task exhibits sufficient similarity to the pretraining corpus: the level of performance increase in three legal tasks was directly tied to the domain specificity of the task. Our findings inform when researchers should engage in resource-intensive pretraining and show that Transformer-based architectures, too, learn embeddings suggestive of distinct legal language.
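The CaseHOLD task described above can be framed as scoring each candidate holding against the citing context and picking the highest-scoring one. A minimal sketch of that prediction loop follows; the paper fine-tunes BERT-style models as the scorer, whereas the `overlap_score` function here is a hypothetical stand-in used only to make the sketch self-contained.

```python
def choose_holding(context, candidates, score):
    """CaseHOLD-style prediction: score each (context, candidate holding)
    pair and return the index of the highest-scoring candidate."""
    scores = [score(context, c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)

def overlap_score(context, candidate):
    """Toy scorer (placeholder for a fine-tuned Transformer head):
    count shared lowercase tokens between context and candidate."""
    return len(set(context.lower().split()) & set(candidate.lower().split()))

# Example: the candidate that restates the citing context scores highest.
pred = choose_holding(
    "the court held that the contract was void",
    ["holding that the contract was void",
     "holding that venue was improper"],
    overlap_score,
)  # → 0
```

In the paper's setup, the scorer is a BERT (or Legal-BERT) model with a multiple-choice head over five candidate holdings per prompt; the selection logic stays the same.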

Results

Task                | Dataset          | Metric             | Value | Model
Question Answering  | CaseHOLD         | Macro F1 (10-fold) | 69.5  | Custom Legal-BERT
Question Answering  | CaseHOLD         | Macro F1 (10-fold) | 68.0  | Legal-BERT
Question Answering  | CaseHOLD         | Macro F1 (10-fold) | 61.3  | BERT
Text Classification | Terms of Service | F1 (10-fold)       | 78.7  | Custom Legal-BERT
Text Classification | Terms of Service | F1 (10-fold)       | 75.0  | Legal-BERT
Text Classification | Terms of Service | F1 (10-fold)       | 72.2  | BERT
Text Classification | Overruling       | F1 (10-fold)       | 97.4  | Custom Legal-BERT
Text Classification | Overruling       | F1 (10-fold)       | 96.3  | Legal-BERT
Text Classification | Overruling       | F1 (10-fold)       | 95.8  | BERT
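The "Macro F1 (10-fold)" metric reported above is the unweighted mean of per-class F1 scores, averaged across ten cross-validation folds. A minimal pure-Python sketch of that computation (the fold data here is illustrative, not from the paper):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1, then the unweighted mean over classes."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def cv_macro_f1(folds):
    """Average macro F1 over (y_true, y_pred) pairs, one pair per fold."""
    return sum(macro_f1(t, p) for t, p in folds) / len(folds)

# Illustrative two-fold example:
score = cv_macro_f1([
    ([0, 0, 1, 1], [0, 1, 1, 1]),
    ([0, 0, 1, 1], [0, 0, 1, 1]),
])
```

In the paper each fold's macro F1 is computed over CaseHOLD's five answer classes; the averaging logic is the same.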

Related Papers

- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
- HATS: Hindi Analogy Test Set for Evaluating Reasoning in Large Language Models (2025-07-17)