Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News

Luís Borges, Bruno Martins, Pável Calado

2018-11-02 · Representation Learning · Natural Language Inference · Document Classification · General Classification · Stance Detection

Paper · PDF · Code (official)

Abstract

Fake news is nowadays an issue of pressing concern, given its recent rise as a potential threat to high-quality journalism and well-informed public discourse. The Fake News Challenge (FNC-1) was organized in 2017 to encourage the development of machine learning-based classification systems for stance detection (i.e., for identifying whether a particular news article agrees with, disagrees with, discusses, or is unrelated to a particular news headline), thus helping in the detection and analysis of possible instances of fake news. This article presents a new approach to this stance detection problem, based on the combination of string similarity features with a deep neural architecture that leverages ideas previously advanced in the context of learning efficient text representations, document classification, and natural language inference. Specifically, we use bi-directional Recurrent Neural Networks, together with max-pooling over the temporal/sequential dimension and neural attention, to represent (i) the headline, (ii) the first two sentences of the news article, and (iii) the entire news article. These representations are then combined/compared, complemented with similarity features inspired by other FNC-1 approaches, and passed to a final layer that predicts the stance of the article towards the headline. We also explore the use of external sources of information, specifically large datasets of sentence pairs originally proposed for training and evaluating natural language inference methods, in order to pre-train specific components of the neural network architecture (e.g., the RNNs used for encoding sentences). The obtained results attest to the effectiveness of the proposed ideas and show that our model, particularly when considering pre-training and the combination of neural representations with similarity features, slightly outperforms the previous state-of-the-art.
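To make the "string similarity features" concrete, here is a minimal, illustrative sketch of the kind of token-overlap statistics used by FNC-1-style systems to compare a headline against an article body. The function name and exact feature set are assumptions for illustration; the paper's actual features are richer.

```python
# Illustrative sketch (not the paper's exact feature set): simple
# token-overlap similarity features between a headline and an article body,
# of the kind used as hand-crafted inputs by FNC-1 stance detection systems.

def token_overlap_features(headline: str, body: str) -> dict:
    """Return basic set-overlap features between headline and body tokens."""
    h_tokens = set(headline.lower().split())
    b_tokens = set(body.lower().split())
    shared = h_tokens & b_tokens
    union = h_tokens | b_tokens
    return {
        # Jaccard similarity of the two token sets.
        "jaccard": len(shared) / len(union) if union else 0.0,
        # Fraction of headline tokens that also appear in the body.
        "headline_coverage": len(shared) / len(h_tokens) if h_tokens else 0.0,
    }
```

In a full system, features like these would be concatenated with the learned neural representations before the final classification layer.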

Results

Task | Dataset | Metric | Value | Model
Natural Language Inference | SNLI | % Test Accuracy | 84.8 | Stacked Bi-LSTMs (shortcut connections, max-pooling)
Natural Language Inference | SNLI | % Test Accuracy | 84.5 | Bi-LSTM sentence encoder (max-pooling)
Natural Language Inference | SNLI | % Test Accuracy | 84.4 | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention)
Natural Language Inference | MultiNLI | Matched | 71.4 | Stacked Bi-LSTMs (shortcut connections, max-pooling)
Natural Language Inference | MultiNLI | Mismatched | 72.2 | Stacked Bi-LSTMs (shortcut connections, max-pooling)
Natural Language Inference | MultiNLI | Matched | 70.7 | Bi-LSTM sentence encoder (max-pooling)
Natural Language Inference | MultiNLI | Mismatched | 71.1 | Bi-LSTM sentence encoder (max-pooling)
Natural Language Inference | MultiNLI | Matched | 70.7 | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention)
Natural Language Inference | MultiNLI | Mismatched | 70.5 | Stacked Bi-LSTMs (shortcut connections, max-pooling, attention)
Fake News Detection | FNC-1 | Per-class Accuracy (Agree) | 51.34 | Bi-LSTM (max-pooling, attention)
Fake News Detection | FNC-1 | Per-class Accuracy (Disagree) | 10.33 | Bi-LSTM (max-pooling, attention)
Fake News Detection | FNC-1 | Per-class Accuracy (Discuss) | 81.52 | Bi-LSTM (max-pooling, attention)
Fake News Detection | FNC-1 | Per-class Accuracy (Unrelated) | 96.74 | Bi-LSTM (max-pooling, attention)
Fake News Detection | FNC-1 | Weighted Accuracy | 82.23 | Bi-LSTM (max-pooling, attention)
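For context on the FNC-1 "Weighted Accuracy" metric: the challenge scored predictions hierarchically, giving partial credit (0.25) for correctly separating related from unrelated pairs and further credit (0.50) for the exact stance label on related pairs, with the total normalized by the maximum attainable score. A minimal sketch of this scheme (the function name is illustrative, not the official scorer's API):

```python
# Sketch of FNC-1's hierarchical weighted scoring: 0.25 for a correct
# related/unrelated decision, plus 0.50 for the exact stance label on
# related pairs, normalized by the best achievable score.

RELATED = {"agree", "disagree", "discuss"}

def fnc1_weighted_accuracy(gold, pred):
    score, best = 0.0, 0.0
    for g, p in zip(gold, pred):
        best += 0.25 + (0.50 if g in RELATED else 0.0)
        if g == p:
            score += 0.25
            if g in RELATED:
                score += 0.50
        elif g in RELATED and p in RELATED:
            score += 0.25  # correct "related" call, wrong stance label
    return score / best if best else 0.0
```

This weighting explains why the model can score 82.23 overall despite the low per-class accuracy on the rare "disagree" class: unrelated pairs dominate the data and are cheap to get right.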

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
A Mixed-Primitive-based Gaussian Splatting Method for Surface Reconstruction (2025-07-15)
LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)