Fake News Detection as Natural Language Inference
Kai-Chou Yang, Timothy Niven, Hung-Yu Kao
Abstract
This report describes the entry by the Intelligent Knowledge Management (IKM) Lab in the WSDM 2019 Fake News Classification challenge. We treat the task as natural language inference (NLI). We individually train a number of the strongest NLI models as well as BERT, ensemble their predictions, and retrain with noisy labels in two stages. We also analyze transitivity relations among title pairs in the train and test sets and identify a subset of test cases that can be reliably classified on that basis alone; the remaining test cases are classified by our ensemble. Our entry achieves a test set accuracy of 88.063%, taking 3rd place in the competition.
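The transitivity analysis mentioned above can be illustrated with a small sketch. The abstract does not spell out the exact composition rules, so the ones below are assumptions: all three relation labels are treated as symmetric, composing through an "agreed" link preserves the other label, and two "disagreed" links are treated as ambiguous, so the system abstains and defers to the ensemble.

```python
def compose(r1, r2):
    """Compose two relation labels along a shared pivot title.

    Hypothetical rules (not stated in the abstract):
      agreed o r -> r; disagreed o disagreed -> None (abstain).
    """
    if r1 == "agreed":
        return r2
    if r2 == "agreed":
        return r1
    return None  # ambiguous; leave for the model ensemble


def infer_by_transitivity(train_pairs, test_pairs):
    """For a test pair (x, z), look for a pivot title y such that
    (x, y) and (y, z) are labeled in training, then compose labels."""
    known = {}
    for x, y, label in train_pairs:
        known[(x, y)] = label
        known[(y, x)] = label  # assume all relations are symmetric
    neighbors = {}
    for (x, y), label in known.items():
        neighbors.setdefault(x, []).append((y, label))
    inferred = {}
    for x, z in test_pairs:
        for y, r1 in neighbors.get(x, []):
            r2 = known.get((y, z))
            if r2 is not None:
                label = compose(r1, r2)
                if label is not None:
                    inferred[(x, z)] = label
                    break
    return inferred
```

For example, if title `a` agrees with `b` and `b` disagrees with `c` in the training data, the test pair `(a, c)` is labeled "disagreed" without consulting the models; pairs with no reliable chain are left to the ensemble.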
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Fake News Classification | WSDM 2019 Fake News Challenge | Accuracy | 88.063% | IKM Lab (NLI ensemble) |