Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding

Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, Jingjing Liu

2020-09-10 · Question Answering · Representation Learning · POS · Cross-Lingual Transfer · Translation · NER · POS Tagging · Zero-Shot Cross-Lingual Transfer

Paper · PDF · Code

Abstract

Large-scale cross-lingual language models (LMs), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.
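The self-teaching loss described in the abstract can be illustrated with a minimal sketch: source-language predictions act as soft pseudo-labels, and the KL divergence penalizes how far the predictions on the translated target-language input drift from them. This is an illustrative stand-alone implementation, not the paper's released code; the function names and the use of plain Python lists are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_self_teaching_loss(source_logits, target_logits):
    """KL(p_source || p_target): the source-language prediction is the
    teacher (soft pseudo-label); the target-language (translated)
    prediction is the student. Names are illustrative, not FILTER's API."""
    p = softmax(source_logits)   # teacher distribution (pseudo-label)
    q = softmax(target_logits)   # student distribution
    eps = 1e-12                  # guard against log(0)
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))
```

When the two predictions agree the loss is zero, and it grows as the target-language prediction diverges, which is why it can replace hard labels for tasks like QA, NER and POS tagging where translated spans no longer align with the source labels.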

Results

Task | Dataset | Metric | Value | Model
Cross-Lingual | XTREME | Avg | 77 | FILTER
Cross-Lingual | XTREME | Question Answering | 68.5 | FILTER
Cross-Lingual | XTREME | Sentence Retrieval | 84.4 | FILTER
Cross-Lingual | XTREME | Sentence-pair Classification | 87.5 | FILTER
Cross-Lingual | XTREME | Structured Prediction | 71.9 | FILTER
Cross-Lingual Transfer | XTREME | Avg | 77 | FILTER
Cross-Lingual Transfer | XTREME | Question Answering | 68.5 | FILTER
Cross-Lingual Transfer | XTREME | Sentence Retrieval | 84.4 | FILTER
Cross-Lingual Transfer | XTREME | Sentence-pair Classification | 87.5 | FILTER
Cross-Lingual Transfer | XTREME | Structured Prediction | 71.9 | FILTER

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Enhancing Cross-task Transfer of Large Language Models via Activation Steering (2025-07-17)