Do Large Language Models Show Biases in Causal Learning? Insights from Contingency Judgment
- URL: http://arxiv.org/abs/2510.13985v1
- Date: Wed, 15 Oct 2025 18:09:00 GMT
- Title: Do Large Language Models Show Biases in Causal Learning? Insights from Contingency Judgment
- Authors: María Victoria Carro, Denise Alejandra Mester, Francisca Gauna Selasco, Giovanni Franco Gabriel Marraffini, Mario Alejandro Leiva, Gerardo I. Simari, María Vanina Martinez
- Abstract summary: Causal learning is the cognitive process of developing the capability of making causal inferences. This process is prone to errors and biases, such as the illusion of causality. This cognitive bias has been proposed to underlie many societal problems.
- Score: 0.1547863211792184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal learning is the cognitive process of developing the capability of making causal inferences based on available information, often guided by normative principles. This process is prone to errors and biases, such as the illusion of causality, in which people perceive a causal relationship between two variables despite lacking supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this work, we examine whether large language models are prone to developing causal illusions when faced with a classic cognitive science paradigm: the contingency judgment task. To investigate this, we constructed a dataset of 1,000 null contingency scenarios (in which the available information is not sufficient to establish a causal relationship between variables) within medical contexts and prompted LLMs to evaluate the effectiveness of potential causes. Our findings show that all evaluated models systematically inferred unwarranted causal relationships, revealing a strong susceptibility to the illusion of causality. While there is ongoing debate about whether LLMs genuinely understand causality or merely reproduce causal language without true comprehension, our findings support the latter hypothesis and raise concerns about the use of language models in domains where accurate causal reasoning is essential for informed decision-making.
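In the contingency judgment literature, the contingency between a candidate cause and an outcome is commonly quantified with the ΔP rule: the probability of the outcome when the cause is present minus its probability when the cause is absent. A "null contingency" scenario, as used in this paper's dataset, is one where ΔP = 0, so the evidence does not support a causal link. A minimal sketch of this computation, using illustrative counts that are not taken from the paper's dataset:

```python
def delta_p(a: int, b: int, c: int, d: int) -> float:
    """Contingency index dP from a 2x2 table of observations:
    a: cause present, outcome present
    b: cause present, outcome absent
    c: cause absent,  outcome present
    d: cause absent,  outcome absent
    dP = P(outcome | cause) - P(outcome | no cause)
    """
    return a / (a + b) - c / (c + d)

# Null-contingency example: patients recover at the same rate (0.75)
# whether or not they take the treatment, so dP = 0 and a causal
# judgment above zero would reflect the illusion of causality.
print(delta_p(15, 5, 30, 10))  # → 0.0
```

A model free of the bias should rate the treatment as ineffective in such scenarios; systematically positive effectiveness ratings are the signature of the causal illusion the paper reports.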
Related papers
- CausalFlip: A Benchmark for LLM Causal Judgment Beyond Semantic Matching [50.65932158912512]
We propose a new causal reasoning benchmark, CausalFlip, to encourage the development of new large language models. CausalFlip consists of causal judgment questions built over event triples that could form different confounder, chain, and collider relations. We evaluate LLMs under multiple training paradigms, including answer-only training, explicit Chain-of-Thought supervision, and a proposed internalized causal reasoning approach.
arXiv Detail & Related papers (2026-02-23T18:06:15Z) - Do Large Language Models Show Biases in Causal Learning? [3.0264418764647605]
Causal learning is the cognitive process of developing the capability of making causal inferences based on available information. This research investigates whether large language models (LLMs) develop causal illusions.
arXiv Detail & Related papers (2024-12-13T19:03:48Z) - Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z) - Context-Aware Reasoning On Parametric Knowledge for Inferring Causal Variables [49.31233968546582]
We introduce a novel benchmark where the objective is to complete a partial causal graph. We show the strong ability of LLMs to hypothesize the backdoor variables between a cause and its effect. Unlike simple memorization of fixed associations, our task requires the LLM to reason according to the context of the entire graph.
arXiv Detail & Related papers (2024-09-04T10:37:44Z) - CELLO: Causal Evaluation of Large Vision-Language Models [9.928321287432365]
Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments.
We introduce a fine-grained and unified definition of causality involving interactions between humans and objects.
We construct a novel dataset, CELLO, consisting of 14,094 causal questions across all four levels of causality.
arXiv Detail & Related papers (2024-06-27T12:34:52Z) - Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel architecture called the Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA) framework.
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification, and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - iReason: Multimodal Commonsense Reasoning using Videos and Natural Language with Interpretability [0.0]
Causality knowledge is vital to building robust AI systems.
We propose iReason, a framework that infers visual-semantic commonsense knowledge using both videos and natural language captions.
arXiv Detail & Related papers (2021-06-25T02:56:34Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.