Replication Data for: Can Large Language Models (or Humans) Disentangle Text?
# Can Large Language Models (or Humans) Disentangle Text?
## Abstract
We investigate the potential of large language models (LLMs) to disentangle text variables, that is, to remove the textual traces of an undesired forbidden variable, a task sometimes known as text distillation that is closely related to the fairness-in-AI and causal inference literatures. We employ a range of LLM approaches that attempt to disentangle text by identifying and removing information about a target variable while preserving other relevant signals. We show that in the strong test of removing sentiment, the statistical association between the processed text and sentiment remains detectable to machine learning classifiers after LLM disentanglement. Furthermore, we find that human annotators also struggle to disentangle sentiment while preserving other semantic content. This suggests that there may be limited separability between concept variables in some text contexts, highlighting the limitations of methods that rely on text-level transformations. It also raises questions about the robustness of disentanglement methods that achieve statistical independence in representation space when this separation is difficult for human coders operating on raw text to attain.
## Repository Details
This repository contains the human-coded and processed reviews underlying the main results of the paper.
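
As a rough illustration of the detectability test described in the abstract, the sketch below trains a simple bag-of-words classifier to predict the original sentiment of reviews from their disentangled text; cross-validated accuracy well above chance indicates that traces of sentiment survive. This is a minimal sketch and not the authors' pipeline: the `processed_texts` and `sentiment` values are toy placeholders standing in for the review data in this repository.

```python
# Minimal sketch (toy data, not the authors' pipeline): probe whether
# sentiment is still statistically detectable in disentangled text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical placeholders for disentangled (processed) reviews.
processed_texts = [
    "The battery lasts two full days on a single charge.",
    "Setup took five minutes and the manual is clear.",
    "Every seam on the jacket is reinforced with double stitching.",
    "The screen stays readable in direct sunlight.",
    "Customer support answered within the hour.",
    "The zipper jammed on the second use.",
    "Half the pages arrived creased and torn.",
    "The app crashes whenever the settings menu is opened.",
    "The handle snapped off after a week of normal use.",
    "Delivery took three weeks with no tracking updates.",
]
# Original sentiment of each review before processing (1 = positive).
sentiment = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Bag-of-words features computed over the processed text only.
X = TfidfVectorizer().fit_transform(processed_texts)

# Cross-validated accuracy clearly above chance (0.5 here) suggests the
# disentanglement left a detectable statistical trace of sentiment.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, sentiment, cv=5)
print(f"Mean sentiment detectability (accuracy): {scores.mean():.2f}")
```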
## Paper Link
Nicolas Audinet de Pieuchon, Adel Daoud, Connor T. Jerzak, Moa Johansson, and Richard Johansson. Can Large Language Models (or Humans) Disentangle Text? In *Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)*, pages 57–67, 2024.