Instruction-Tuning LLMs for Event Extraction with Annotation Guidelines
Saurabh Srivastava, Sweta Pati, Ziyu Yao
2025-02-22 · Event Extraction
Abstract
In this work, we study the effect of annotation guidelines (textual descriptions of event types and their arguments) when instruction-tuning large language models for event extraction. We conducted a series of experiments with both human-provided and machine-generated guidelines in both full- and low-data settings. Our results demonstrate the promise of annotation guidelines when a decent amount of training data is available, and highlight their effectiveness in improving cross-schema generalization and performance on low-frequency event types.
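To make the setup concrete, the sketch below shows one plausible way an instruction-tuning prompt could embed an annotation guideline for an event type. The template, function name, and example event schema are illustrative assumptions, not the authors' exact format.

```python
# Hypothetical sketch: embedding an annotation guideline (a textual
# description of an event type and its argument roles) into an
# instruction-tuning prompt for event extraction. The template is an
# assumption for illustration, not the paper's actual prompt.

def build_prompt(sentence, event_type, guideline, argument_roles):
    """Assemble an instruction prompt that includes the guideline text."""
    roles = ", ".join(argument_roles)
    return (
        f"Task: extract arguments for the event type '{event_type}'.\n"
        f"Guideline: {guideline}\n"
        f"Argument roles: {roles}\n"
        f"Sentence: {sentence}\n"
        "Answer:"
    )

# Example event schema (ACE-style names used purely for illustration).
prompt = build_prompt(
    sentence="The company hired three engineers in March.",
    event_type="Personnel.Start-Position",
    guideline=("A Start-Position event occurs when a person begins "
               "working for an organization."),
    argument_roles=["Person", "Entity", "Time"],
)
print(prompt)
```

During instruction tuning, such a prompt would be paired with the gold argument annotations as the target output; ablating the `Guideline:` line is one way to measure the guideline's contribution.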