Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty

Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao

2023-09-07 · Uncertainty Quantification · Instruction Following · Open Information Extraction

Paper · PDF

Abstract

The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text, typically in the form of (subject, relation, object) triples. Despite the potential of large language models (LLMs) like ChatGPT as general task solvers, they lag behind state-of-the-art (supervised) methods on OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant context from relevant relations and to generate structured output, owing to restrictions on fine-tuning the model. Second, LLMs generate responses autoregressively based on probability, which leaves the predicted relations without confidence estimates. In this paper, we assess the capabilities of LLMs on the OIE task. In particular, we propose various in-context learning strategies to enhance the LLMs' instruction-following ability, and a demonstration uncertainty quantification module to enhance the confidence of the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively.
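The paper's uncertainty module is not specified in this abstract, but one common way to attach a confidence score to LLM-extracted triples is self-consistency: sample the extraction several times with stochastic decoding and score each (subject, relation, object) triple by how often it reappears. A minimal sketch of that idea, under the assumption that the sampled extractions are already parsed into tuples (the sentence and samples below are hypothetical):

```python
from collections import Counter

def triple_confidence(sampled_triples):
    """Score each triple by its agreement rate across multiple
    stochastic LLM samples -- a simple self-consistency proxy for
    the confidence of a generated relation."""
    counts = Counter(t for sample in sampled_triples for t in set(sample))
    n = len(sampled_triples)
    return {t: c / n for t, c in counts.items()}

# Hypothetical: three stochastic decodings of the same input sentence.
samples = [
    [("Ada Lovelace", "wrote", "the first program")],
    [("Ada Lovelace", "wrote", "the first program"),
     ("Ada Lovelace", "worked with", "Charles Babbage")],
    [("Ada Lovelace", "wrote", "the first program")],
]
conf = triple_confidence(samples)
# The triple appearing in all 3 samples scores 1.0; the one
# appearing once scores 1/3 and could be filtered by a threshold.
```

Triples below a chosen confidence threshold can then be discarded, which is one plausible way low-confidence relations get pruned in such a pipeline.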

Results

Task                         Dataset  Metric  Value  Model
Open Information Extraction  CaRB     F1      52.1   GPT-3.5-Turbo w/ Selected Demo & Uncertainty
Open Information Extraction  CaRB     F1      51.5   LLaMA-2-70B w/ Selected Demo & Uncertainty
Open Information Extraction  CaRB     F1      36.2   LLaMA-2-13B w/ Selected Demo & Uncertainty
Open Information Extraction  OIE2016  F1      65.8   LLaMA-2-70B w/ Selected Demo & Uncertainty
Open Information Extraction  OIE2016  F1      65.1   GPT-3.5-Turbo w/ Selected Demo & Uncertainty
Open Information Extraction  OIE2016  F1      36.9   LLaMA-2-13B w/ Selected Demo & Uncertainty

Related Papers

AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning (2025-07-17)
Distributional Reinforcement Learning on Path-dependent Options (2025-07-16)
Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection (2025-07-15)
Joint space-time wind field data extrapolation and uncertainty quantification using nonparametric Bayesian dictionary learning (2025-07-15)
A Risk-Aware Adaptive Robust MPC with Learned Uncertainty Quantification (2025-07-15)
How Many Instructions Can LLMs Follow at Once? (2025-07-15)
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering (2025-07-15)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)