Identifying attributions of causality in political text
- URL: http://arxiv.org/abs/2512.03214v1
- Date: Tue, 02 Dec 2025 20:37:07 GMT
- Title: Identifying attributions of causality in political text
- Authors: Paulina Garcia-Corral
- Abstract summary: Explanations are a fundamental element of how people make sense of the political world. I introduce a framework for detecting and parsing explanations in political text. I demonstrate how causal explanations can be studied at scale.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explanations are a fundamental element of how people make sense of the political world. Citizens routinely ask and answer questions about why events happen, who is responsible, and what could or should be done differently. Yet despite their importance, explanations remain an underdeveloped object of systematic analysis in political science, and existing approaches are fragmented and often issue-specific. I introduce a framework for detecting and parsing explanations in political text. To do this, I train a lightweight causal language model that returns a structured data set of causal claims in the form of cause-effect pairs for downstream analysis. I demonstrate how causal explanations can be studied at scale, and show the method's modest annotation requirements, generalizability, and accuracy relative to human coding.
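The paper does not include an implementation, but the structured output it describes, a data set of cause-effect pairs, can be illustrated with a minimal sketch. The connective patterns, the `CausalClaim` schema, and the `extract_claims` function below are hypothetical stand-ins for the trained causal language model; a fine-tuned model would replace the pattern list, while the output schema is the part that matters downstream.

```python
import re
from dataclasses import dataclass

@dataclass
class CausalClaim:
    """One structured causal claim parsed from a sentence."""
    cause: str
    effect: str
    sentence: str

# Hypothetical connective patterns; the paper's trained model replaces
# this heuristic with learned detection and parsing.
_PATTERNS = [
    re.compile(r"^(?P<effect>.+?)\s+because\s+(?P<cause>.+)$", re.I),
    re.compile(r"^(?P<cause>.+?)\s+led to\s+(?P<effect>.+)$", re.I),
    re.compile(r"^(?P<effect>.+?)\s+due to\s+(?P<cause>.+)$", re.I),
]

def extract_claims(sentences):
    """Return a list of CausalClaim records for downstream analysis."""
    claims = []
    for s in sentences:
        for pat in _PATTERNS:
            m = pat.match(s.strip().rstrip("."))
            if m:
                claims.append(CausalClaim(cause=m.group("cause"),
                                          effect=m.group("effect"),
                                          sentence=s))
                break  # keep at most one claim per sentence in this sketch
    return claims

if __name__ == "__main__":
    docs = ["Unemployment rose because the plant closed.",
            "The new tariff led to higher consumer prices."]
    for claim in extract_claims(docs):
        print(f"cause={claim.cause!r} -> effect={claim.effect!r}")
```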
Related papers
- An Explainable Framework for Misinformation Identification via Critical Question Answering
We propose a new explainable framework for misinformation detection based on the theory of Argumentation Schemes and Critical Questions. We release NLAS-CQ, the first corpus combining 3,566 textbook-like natural language argumentation scheme instances and 4,687 corresponding answers to critical questions related to these arguments.
arXiv Detail & Related papers (2025-03-18T18:24:23Z)
- Few-shot Policy (de)composition in Conversational Question Answering
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting. We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies. We apply this approach to ShARC, a popular policy compliance detection (PCD) and conversational machine reading benchmark, and show competitive performance with no task-specific finetuning.
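As a rough illustration of that pipeline (not the paper's actual code), the sketch below conjoins sub-question truth values into a compliance verdict; the policy text, sub-questions, and their answers are invented for the example, standing in for what a few-shot prompted LLM would produce.

```python
# Minimal sketch of the symbolic half of neuro-symbolic policy
# compliance. Hypothetical policy: "Refunds are allowed if the item
# is unused and the request is made within 30 days."
sub_questions = {
    "Is the item unused?": True,             # answered from context by the LLM
    "Was the request made within 30 days?": False,
}

def compliant(answers: dict[str, bool]) -> bool:
    """Explicit logic statement: the policy is a conjunction of sub-questions."""
    return all(answers.values())

for q, v in sub_questions.items():
    print(f"{q} -> {v}")
print("Policy satisfied:", compliant(sub_questions))  # False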
arXiv Detail & Related papers (2025-01-20T08:40:15Z)
- Failure Modes of LLMs for Causal Reasoning on Narratives
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- Inducing Causal Structure for Abstractive Text Summarization
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Embracing Background Knowledge in the Analysis of Actual Causality: An Answer Set Programming Approach
This paper presents a rich knowledge representation language aimed at formalizing causal knowledge.
A definition of cause is presented and used to analyze the actual causes of changes with respect to sequences of actions representing example scenarios.
arXiv Detail & Related papers (2023-06-06T17:21:21Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- e-CARE: a New Dataset for Exploring Explainable Causal Reasoning
We present a human-annotated explainable CAusal REasoning dataset (e-CARE) with over 21K causal reasoning questions.
We show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models.
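The exact file format is defined by the dataset release; as a sketch of the kind of record explainable causal reasoning involves, the field names below are assumptions, not the official e-CARE schema.

```python
from dataclasses import dataclass

@dataclass
class CausalReasoningItem:
    """Assumed shape of an explainable causal reasoning example.

    Field names are illustrative; consult the e-CARE release for the
    actual schema.
    """
    premise: str
    hypothesis_1: str
    hypothesis_2: str
    label: int                   # index of the causally valid hypothesis
    conceptual_explanation: str  # free-text reason why the label holds

item = CausalReasoningItem(
    premise="Tom put ice cubes into his drink.",
    hypothesis_1="The drink got colder.",
    hypothesis_2="The drink got warmer.",
    label=0,
    conceptual_explanation="Ice absorbs heat from the liquid as it melts.",
)
print(item.conceptual_explanation)
```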
arXiv Detail & Related papers (2022-05-12T02:41:48Z)
- Human Interpretation of Saliency-based Explanation Over Text
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
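The exact adjustment is the paper's contribution and is not given here; a minimal sketch, assuming per-token over/under-perception factors estimated from user studies, might look like the following (the tokens, bias values, and `adjust` function are hypothetical).

```python
# Minimal sketch: rescale raw saliency scores by estimated perception
# bias, then renormalize. Bias values are hypothetical placeholders for
# model estimates of human over-/under-perception.
raw_saliency = {"tax": 0.50, "cut": 0.30, "the": 0.20}
perception_bias = {"tax": 1.4, "cut": 0.9, "the": 0.6}  # >1: over-perceived

def adjust(saliency, bias):
    """Shrink over-perceived tokens, boost under-perceived ones, renormalize."""
    corrected = {tok: s / bias[tok] for tok, s in saliency.items()}
    total = sum(corrected.values())
    return {tok: v / total for tok, v in corrected.items()}

print(adjust(raw_saliency, perception_bias))
```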
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Everything Has a Cause: Leveraging Causal Inference in Legal Text Analysis
Causal inference is the process of capturing cause-effect relationships among variables.
We propose a novel Graph-based Causal Inference framework, which builds causal graphs from fact descriptions without much human involvement.
We observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.
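As an illustration of what building causal graphs from fact descriptions can yield downstream (this is not the paper's GCI implementation, the extracted pairs are invented, and the sketch requires the networkx package), a directed graph of cause-effect pairs supports simple ancestry queries:

```python
import networkx as nx

# Hypothetical (cause, effect) pairs, e.g. extracted from a legal fact
# description; the real GCI framework learns these relations.
pairs = [
    ("signed contract", "owed delivery"),
    ("owed delivery", "breach"),
    ("storm", "delayed shipment"),
    ("delayed shipment", "breach"),
]

g = nx.DiGraph()
g.add_edges_from(pairs)

# Query: which factors are causal ancestors of the outcome "breach"?
print(sorted(nx.ancestors(g, "breach")))
# ['delayed shipment', 'owed delivery', 'signed contract', 'storm']
```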
arXiv Detail & Related papers (2021-04-19T16:13:10Z)
- Dependency Decomposition and a Reject Option for Explainable Models
Recent deep learning models perform extremely well in various inference tasks.
Recent advances offer methods to visualize features and describe attributions of the input.
We present the first analysis of dependencies regarding the probability distribution over the desired image classification outputs.
arXiv Detail & Related papers (2020-12-11T17:39:33Z)
- Aligning Faithful Interpretations with their Social Attribution
We find that the requirement that model interpretations be faithful is vague and incomplete. We identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z)