Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Leveraging Monolingual Data for Crosslingual Compositional Word Representations

Hubert Soyer, Pontus Stenetorp, Akiko Aizawa

2014-12-19 · Document Classification
Paper · PDF · Code (official)

Abstract

In this work, we present a novel neural network based architecture for inducing compositional crosslingual word representations. Unlike previously proposed methods, our method fulfills the following three criteria: it constrains the word-level representations to be compositional, it is capable of leveraging both bilingual and monolingual data, and it is scalable to large vocabularies and large quantities of data. The key component of our approach is what we refer to as a monolingual inclusion criterion, which exploits the observation that phrases are more closely semantically related to their sub-phrases than to other randomly sampled phrases. We evaluate our method on a well-established crosslingual document classification task and achieve results that are either comparable to, or greatly improve upon, previous state-of-the-art methods. Concretely, our method reaches 92.7% and 84.4% accuracy on the English-to-German and German-to-English sub-tasks respectively. The former advances the state of the art by 0.9 percentage points of accuracy; the latter is an absolute improvement of 7.7 percentage points of accuracy over the previous state of the art, corresponding to a 33.0% relative error reduction.
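The monolingual inclusion criterion described above can be sketched as a margin-based ranking loss: a phrase's representation should lie closer to the representation of its own sub-phrase than to that of a randomly sampled phrase. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the toy vocabulary, the mean-of-word-vectors composition function, the squared-Euclidean distance, and the `margin` value are all hypothetical choices for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: each word maps to a dense vector (dimensionality is arbitrary here).
dim = 8
vocab = {w: rng.normal(size=dim) for w in
         ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast"]}

def compose(phrase):
    """Compositional phrase representation: a simple mean of word vectors
    (an assumption; the paper's composition function may differ)."""
    return np.mean([vocab[w] for w in phrase], axis=0)

def inclusion_loss(phrase, sub_phrase, random_phrase, margin=1.0):
    """Hinge-style sketch of the monolingual inclusion criterion:
    a phrase should be closer to its own sub-phrase than to a randomly
    sampled phrase, by at least `margin`."""
    p = compose(phrase)
    d_sub = np.sum((p - compose(sub_phrase)) ** 2)   # distance to sub-phrase
    d_rand = np.sum((p - compose(random_phrase)) ** 2)  # distance to random phrase
    return max(0.0, margin + d_sub - d_rand)

loss = inclusion_loss(["the", "cat", "sat"], ["the", "cat"], ["dog", "ran", "fast"])
```

Minimizing such a loss over many (phrase, sub-phrase, random phrase) triples pushes sub-phrases toward their containing phrases without requiring any bilingual supervision, which is what lets the method exploit monolingual data.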

Results

Task | Dataset | Metric | Value | Model
Cross-Lingual Document Classification | Reuters RCV1/RCV2 English-to-German | Accuracy | 92.7 | Biinclusion (Euro500kReuters)
Cross-Lingual Document Classification | Reuters RCV1/RCV2 German-to-English | Accuracy | 84.4 | Biinclusion (Euro500kReuters)
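The abstract's "33.0% in error reduction" claim follows from the accuracy figures above. Relative error reduction compares the drop in error rate against the previous error rate; the previous German-to-English state of the art is implied to be 76.7% (84.4 minus the 7.7-point absolute improvement). A quick check of the arithmetic:

```python
def error_reduction(new_acc, old_acc):
    """Relative reduction in classification error.
    Inputs are accuracies in percent; output is the reduction in percent."""
    old_err = 100.0 - old_acc  # previous error rate
    new_err = 100.0 - new_acc  # new error rate
    return 100.0 * (old_err - new_err) / old_err

# German-to-English: 84.4% vs an implied previous state of the art of 76.7%.
# Error drops from 23.3 to 15.6; 7.7 / 23.3 ≈ 33.0%.
print(round(error_reduction(84.4, 76.7), 1))  # 33.0
```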

Related Papers

Can Reasoning LLMs Enhance Clinical Document Classification? (2025-04-10)
Text Chunking for Document Classification for Urban System Management using Large Language Models (2025-03-31)
Evaluating Negative Sampling Approaches for Neural Topic Models (2025-03-23)
Converting Transformers into DGNNs Form (2025-02-01)
Cross-Entropy Attacks to Language Models via Rare Event Simulation (2025-01-21)
On Importance of Layer Pruning for Smaller BERT Models and Low Resource Languages (2025-01-01)
Data-Driven Self-Supervised Graph Representation Learning (2024-12-24)
Zero-Shot Prompting and Few-Shot Fine-Tuning: Revisiting Document Image Classification Using Large Language Models (2024-12-18)