Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Speaker attribution in German parliamentary debates with QLoRA-adapted large language models

Tobias Bornheim, Niklas Grieger, Patrick Gustav Blaneck, Stephan Bialonski

Published: 2023-09-18
Tasks: Speaker Attribution in German Parliamentary Debates (GermEval 2023, subtask 1); Large Language Model; Semantic Role Labeling; Language Modelling; Speaker Attribution in German Parliamentary Debates (GermEval 2023, subtask 2)
Paper · PDF · Code (official)

Abstract

The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies, but it also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the Llama 2 family of large language models to automate speaker attribution in German parliamentary debates from 2017 to 2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and find that our approach achieves competitive performance in the GermEval 2023 Shared Task on Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for computational analysis of political discourse and the development of semantic role labeling systems.
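QLoRA fine-tunes a quantized, frozen base model by training only small low-rank adapter matrices inserted into its layers. The NumPy sketch below illustrates just the low-rank update at the heart of the method (quantization omitted); all names and dimensions are hypothetical, not taken from the paper or the Llama 2 codebase.

```python
import numpy as np

# Hypothetical layer dimensions; r is the LoRA rank (r << d_in, d_out).
d_out, d_in, r = 8, 16, 2
rng = np.random.default_rng(0)

# Frozen base weight matrix (in QLoRA this would be stored 4-bit quantized).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapters. B starts at zero, so at initialization the
# adapted layer computes exactly the same function as the frozen base layer.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16  # LoRA scaling hyperparameter

def adapted_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradient updates.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(adapted_forward(x), W @ x)  # identical to base at init

# The adapters add r * (d_in + d_out) trainable parameters per layer,
# versus d_in * d_out for full fine-tuning.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full params")
```

The efficiency gain is why a 70B-parameter model can be adapted on modest hardware: only the small `A` and `B` matrices are trained and stored, while the quantized base weights stay fixed.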

Results

Task | Dataset | Metric | Value | Model
Speaker Attribution in German Parliamentary Debates (GermEval 2023, subtask 1) | GePaDe | F1 | 0.813 | Llama 2 70B (QLoRA-adapted)
Speaker Attribution in German Parliamentary Debates (GermEval 2023, subtask 2) | GePaDe | F1 | 0.891 | Llama 2 70B (QLoRA-adapted)

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)