Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Neural Attentive Bag-of-Entities Model for Text Classification

Ikuya Yamada, Hiroyuki Shindo

Published 2019-09-03 · CoNLL 2019
Tasks: Text Classification · Question Answering · General Classification · Classification
Links: Paper · PDF · Code (official)

Abstract

This study proposes the Neural Attentive Bag-of-Entities (NABoE) model, a neural network model that performs text classification using entities in a knowledge base. Entities provide unambiguous and relevant semantic signals that are beneficial for capturing the semantics of a text. We combine simple, high-recall, dictionary-based entity detection with a novel neural attention mechanism that enables the model to focus on a small number of unambiguous and relevant entities. We test the effectiveness of our model on two standard text classification datasets (the 20 Newsgroups and R8 datasets) and a popular factoid question answering dataset based on a trivia quiz game. Our model achieves state-of-the-art results on all datasets. The source code of the proposed model is available online at https://github.com/wikipedia2vec/wikipedia2vec.
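The two-stage pipeline the abstract describes — high-recall dictionary lookup to gather candidate entities, then a learned attention distribution that down-weights ambiguous candidates before pooling — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy entity embeddings, the `dictionary` mapping, and the single attention vector `W_att` are all stand-ins (in the paper, entity embeddings come from Wikipedia2Vec and attention is parameterized and trained jointly with the classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: entity embeddings and a name-to-entity dictionary
# (stand-ins for Wikipedia2Vec embeddings and a Wikipedia anchor dictionary).
DIM = 8
entity_emb = {name: rng.normal(size=DIM) for name in
              ["Apple_Inc.", "Apple_(fruit)", "IPhone", "Orchard"]}
dictionary = {"apple": ["Apple_Inc.", "Apple_(fruit)"],
              "iphone": ["IPhone"],
              "orchard": ["Orchard"]}

def detect_entities(tokens):
    """High-recall dictionary lookup: every surface-form match yields
    all of its candidate entities, ambiguous or not."""
    return [e for t in tokens for e in dictionary.get(t.lower(), [])]

def naboe_features(tokens, W_att):
    """Attention-weighted bag of detected entity embeddings:
    softmax attention lets the model focus on relevant candidates."""
    candidates = detect_entities(tokens)
    E = np.stack([entity_emb[e] for e in candidates])  # (n_candidates, DIM)
    scores = E @ W_att                                 # relevance score per candidate
    att = np.exp(scores - scores.max())
    att /= att.sum()                                   # softmax over candidates
    return att @ E                                     # (DIM,) document vector

W_att = rng.normal(size=DIM)  # assumed: in the paper this is learned
doc = "The iPhone is made by Apple".split()
vec = naboe_features(doc, W_att)
print(vec.shape)  # (8,)
```

Note how "apple" contributes two competing candidates (`Apple_Inc.` and `Apple_(fruit)`); the attention weights are what allow a trained model to suppress the irrelevant one, rather than filtering candidates with a hard disambiguation step.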

Results

| Task                | Dataset | Metric    | Value | Model      |
|---------------------|---------|-----------|-------|------------|
| Text Classification | R8      | Accuracy  | 97.1  | NABoE-full |
| Text Classification | R8      | F-measure | 91.7  | NABoE-full |
| Text Classification | 20NEWS  | Accuracy  | 86.8  | NABoE-full |
| Text Classification | 20NEWS  | F-measure | 86.2  | NABoE-full |
| Classification      | R8      | Accuracy  | 97.1  | NABoE-full |
| Classification      | R8      | F-measure | 91.7  | NABoE-full |
| Classification      | 20NEWS  | Accuracy  | 86.8  | NABoE-full |
| Classification      | 20NEWS  | F-measure | 86.2  | NABoE-full |

Related Papers

- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
- Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)