Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models

Pierre L. Dognin, Inkit Padhi, Igor Melnyk, Payel Das

2021-08-27 · EMNLP 2021

Tags: Text Generation · Reinforcement Learning · Joint Entity and Relation Extraction · Graph Generation · reinforcement-learning
Paper · PDF · Code (official)

Abstract

Automatic construction of relevant Knowledge Bases (KBs) from text, and generation of semantically meaningful text from KBs, are both long-standing goals in Machine Learning. In this paper, we present ReGen, a bidirectional generation of text and graph leveraging Reinforcement Learning (RL) to improve performance. Graph linearization enables us to re-frame both tasks as a sequence-to-sequence generation problem regardless of the generative direction, which in turn allows the use of Reinforcement Learning for sequence training, where the model itself is employed as its own critic, leading to Self-Critical Sequence Training (SCST). We present an extensive investigation demonstrating that the use of RL via SCST benefits graph and text generation on the WebNLG+ 2020 and TekGen datasets. Our system provides state-of-the-art results on WebNLG+ 2020 by significantly improving upon published results from the WebNLG+ 2020 Challenge for both text-to-graph and graph-to-text generation tasks.
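The abstract's two core ideas can be sketched compactly: graph linearization turns triples into a flat token sequence so either direction becomes seq2seq, and SCST uses the model's own greedy decode as the reward baseline. The special-token scheme and function names below are illustrative assumptions, not ReGen's exact implementation.

```python
def linearize_graph(triples):
    """Serialize (subject, relation, object) triples into one flat string,
    so generating a graph becomes ordinary sequence generation.
    The [S]/[P]/[O] marker scheme is a hypothetical choice for illustration."""
    return " ".join(f"[S] {s} [P] {p} [O] {o}" for s, p, o in triples)


def scst_advantage(sample_reward, greedy_reward):
    """Self-Critical Sequence Training: the greedy decode acts as the
    model's own critic (baseline). Sampled sequences scoring above the
    greedy one get a positive advantage (reinforced); those scoring
    below get a negative one (suppressed)."""
    return sample_reward - greedy_reward


# A sampled graph that scores better than the greedy decode is reinforced.
seq = linearize_graph([("Alan Bean", "occupation", "astronaut")])
adv = scst_advantage(sample_reward=0.8, greedy_reward=0.6)
```

In training, the advantage would scale the log-probability of the sampled sequence; here only the baseline-subtraction step is shown.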

Results

Task                   | Dataset    | Metric | Value | Model
-----------------------|------------|--------|-------|------------------------------------------
Relation Extraction    | WebNLG 3.0 | F1     | 72.3  | ReGen (Ours) T2G.CE
Relation Extraction    | WebNLG 3.0 | F1     | 72.0  | ReGen (Ours) T2G.RL
Relation Extraction    | WebNLG 3.0 | F1     | 68.9  | Amazon AI (Shanghai) (guo-etal-2020-2)
Relation Extraction    | WebNLG 3.0 | F1     | 68.2  | bt5 (agarwal-etal-2020-machine)
Relation Extraction    | TekGen     | F1     | 62.3  | ReGen-SCST
Relation Extraction    | TekGen     | F1     | 61.9  | ReGen-CE
Information Extraction | WebNLG 3.0 | F1     | 72.3  | ReGen (Ours) T2G.CE
Information Extraction | WebNLG 3.0 | F1     | 72.0  | ReGen (Ours) T2G.RL
Information Extraction | WebNLG 3.0 | F1     | 68.9  | Amazon AI (Shanghai) (guo-etal-2020-2)
Information Extraction | WebNLG 3.0 | F1     | 68.2  | bt5 (agarwal-etal-2020-machine)
Information Extraction | TekGen     | F1     | 62.3  | ReGen-SCST
Information Extraction | TekGen     | F1     | 61.9  | ReGen-CE

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)