
XF2T: Cross-lingual Fact-to-Text Generation for Low-Resource Languages

Shivprasad Sagare, Tushar Abhishek, Bhavyajeet Singh, Anubhav Sharma, Manish Gupta, Vasudeva Varma

2022-09-22 · Question Answering · Data-to-Text Generation · Text Generation · Descriptive

Abstract

Multiple business scenarios require automated generation of descriptive, human-readable text from structured input data. Hence, fact-to-text generation systems have been developed for various downstream tasks such as generating soccer reports, weather and financial reports, medical reports, person biographies, etc. Unfortunately, previous work on fact-to-text (F2T) generation has focused primarily on English, mainly due to the high availability of relevant datasets. Only recently was the problem of cross-lingual fact-to-text (XF2T) generation proposed, along with a dataset, XALIGN, covering eight languages. However, there has been no rigorous work on the actual XF2T generation problem. We extend the XALIGN dataset with annotated data for four more languages: Punjabi, Malayalam, Assamese and Oriya. We conduct an extensive study using popular Transformer-based text generation models on our extended multi-lingual dataset, which we call XALIGNV2. Further, we investigate the performance of different text generation strategies: multiple variations of pretraining, fact-aware embeddings and structure-aware input encoding. Our extensive experiments show that a multi-lingual mT5 model that uses fact-aware embeddings with structure-aware input encoding leads to the best results on average across the twelve languages. We make our code, dataset and model publicly available, and we hope that this will help advance further research in this critical area.
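To make the structure-aware input encoding concrete, here is a minimal sketch of how (subject, relation, object) facts could be linearized into a tagged source string for mT5 using Hugging Face's transformers library. The <S>/<R>/<O> tags, the prompt format, and the example facts are illustrative assumptions, not the exact encoding or fact-aware embedding design used in the paper.

```python
# Minimal sketch: linearize facts into a structure-tagged mT5 input.
# The <S>/<R>/<O> tags and example facts are illustrative assumptions,
# not the paper's exact encoding scheme.
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model_name = "google/mt5-small"  # small public checkpoint for illustration
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# Register the structure tags so each becomes a single token.
tokenizer.add_tokens(["<S>", "<R>", "<O>"])
model.resize_token_embeddings(len(tokenizer))

def linearize(facts, target_lang):
    """Flatten (subject, relation, object) triples into one tagged string."""
    parts = [f"generate in {target_lang}:"]
    for subj, rel, obj in facts:
        parts.append(f"<S> {subj} <R> {rel} <O> {obj}")
    return " ".join(parts)

facts = [
    ("Narendra Modi", "position held", "Prime Minister of India"),
    ("Narendra Modi", "date of birth", "17 September 1950"),
]
inputs = tokenizer(linearize(facts, "Hindi"), return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

A base mT5 checkpoint would need fine-tuning on XALIGNV2 before the generated text is meaningful; the snippet only illustrates how structured input can be fed to the model.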

Results

Task                    | Dataset | Metric | Value | Model
Text Generation         | XAlign  | BLEU-4 | 29.27 | Fact-aware embedding with mT5
Text Generation         | XAlign  | METEOR | 53.64 | Fact-aware embedding with mT5
Text Generation         | XAlign  | BLEU-4 | 25.88 | Bi-lingual mT5
Text Generation         | XAlign  | METEOR | 50.91 | Bi-lingual mT5
Text Generation         | XAlign  | BLEU-4 | 18.91 | Translate-Output mT5
Text Generation         | XAlign  | METEOR | 42.83 | Translate-Output mT5
Data-to-Text Generation | XAlign  | BLEU-4 | 29.27 | Fact-aware embedding with mT5
Data-to-Text Generation | XAlign  | METEOR | 53.64 | Fact-aware embedding with mT5
Data-to-Text Generation | XAlign  | BLEU-4 | 25.88 | Bi-lingual mT5
Data-to-Text Generation | XAlign  | METEOR | 50.91 | Bi-lingual mT5
Data-to-Text Generation | XAlign  | BLEU-4 | 18.91 | Translate-Output mT5
Data-to-Text Generation | XAlign  | METEOR | 42.83 | Translate-Output mT5
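The BLEU-4 and METEOR figures above are standard text-overlap metrics. The sketch below shows a generic way to compute them with sacrebleu and NLTK; it is not the paper's evaluation script, and the example sentences are invented.

```python
# Generic sketch of BLEU-4 and METEOR scoring; not the paper's eval code.
import nltk
import sacrebleu
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR needs WordNet data

hypotheses = ["modi has served as prime minister of india since 2014"]
references = [["modi has been the prime minister of india since 2014"]]

# sacrebleu scores up to 4-grams by default, i.e. BLEU-4 (0-100 scale).
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU-4: {bleu.score:.2f}")

# NLTK's METEOR expects pre-tokenized references and hypothesis.
meteor = meteor_score([references[0][0].split()], hypotheses[0].split())
print(f"METEOR: {meteor * 100:.2f}")
```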

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
DiffRhythm+: Controllable and Flexible Full-Length Song Generation with Preference Optimization (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)