Learning Nonlinear Causal Reductions to Explain Reinforcement Learning Policies
- URL: http://arxiv.org/abs/2507.14901v1
- Date: Sun, 20 Jul 2025 10:25:24 GMT
- Title: Learning Nonlinear Causal Reductions to Explain Reinforcement Learning Policies
- Authors: Armin Kekić, Jan Schneider, Dieter Büchler, Bernhard Schölkopf, Michel Besserve
- Abstract summary: We take a causal perspective on explaining the behavior of reinforcement learning policies. We learn a simplified high-level causal model that explains these relationships. We prove that for a class of nonlinear causal models, there exists a unique solution.
- Score: 50.30741668990102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Why do reinforcement learning (RL) policies fail or succeed? This is a challenging question due to the complex, high-dimensional nature of agent-environment interactions. In this work, we take a causal perspective on explaining the behavior of RL policies by viewing the states, actions, and rewards as variables in a low-level causal model. We introduce random perturbations to policy actions during execution and observe their effects on the cumulative reward, learning a simplified high-level causal model that explains these relationships. To this end, we develop a nonlinear Causal Model Reduction framework that ensures approximate interventional consistency, meaning the simplified high-level model responds to interventions in a similar way as the original complex system. We prove that for a class of nonlinear causal models, there exists a unique solution that achieves exact interventional consistency, ensuring learned explanations reflect meaningful causal patterns. Experiments on both synthetic causal models and practical RL tasks, including pendulum control and robot table tennis, demonstrate that our approach can uncover important behavioral patterns, biases, and failure modes in trained RL policies.
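To make the perturbation-based data collection concrete, the sketch below shows one way to gather (perturbation, return) pairs from a policy and fit a toy high-level model on top of them. It is a minimal sketch, not the paper's method: the Gymnasium-style `env`/`policy` interfaces, all function names, and the PCA-plus-regression reduction are illustrative assumptions, whereas the paper learns the reduction with an interventional-consistency objective.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def rollout_with_perturbation(env, policy, delta, seed=0):
    """Run one episode, adding delta[t] to the policy's action at step t;
    return the cumulative reward (Gymnasium-style env assumed)."""
    obs, _ = env.reset(seed=seed)
    total_reward = 0.0
    for t in range(len(delta)):
        action = policy(obs) + delta[t]
        obs, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward

def collect_interventional_data(env, policy, n_episodes=500, horizon=200,
                                action_dim=1, scale=0.1):
    """Sample random action perturbations and record their effect on the return."""
    rng = np.random.default_rng(0)
    deltas, returns = [], []
    for i in range(n_episodes):
        delta = scale * rng.standard_normal((horizon, action_dim))
        returns.append(rollout_with_perturbation(env, policy, delta, seed=i))
        deltas.append(delta.ravel())  # flatten the per-step perturbations
    return np.asarray(deltas), np.asarray(returns)

def fit_high_level_model(deltas, returns):
    """Crude stand-in for the reduction step: compress the perturbation
    trajectories to one high-level cause Z, then fit a nonlinear map
    Z -> return. This only illustrates the shape of the pipeline; it does
    not enforce the paper's interventional-consistency condition."""
    reducer = PCA(n_components=1)
    z = reducer.fit_transform(deltas)
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
    model.fit(z, returns)
    return reducer, model
```

On a task like pendulum control, one could then inspect which perturbation directions the learned one-dimensional cause weights most heavily, as a rough proxy for the behavioral patterns and failure modes the paper extracts from its learned causal reduction.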
Related papers
- Reframing attention as a reinforcement learning problem for causal discovery [3.2498796510544636]
We introduce the Causal Process framework as a novel theory for representing dynamic hypotheses about causal structure. This allows us to reformulate the attention mechanism popularized by Transformer networks within an RL setting.
arXiv Detail & Related papers (2025-07-18T13:50:57Z)
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning [26.34622544479565]
Causal dynamics learning is a promising approach to enhancing robustness in reinforcement learning.
We propose a novel model that infers fine-grained causal structures and employs them for prediction.
arXiv Detail & Related papers (2024-06-05T13:13:58Z)
- Learning by Doing: An Online Causal Reinforcement Learning Framework with Causal-Aware Policy [38.86867078596718]
We consider explicitly modeling the generation process of states with a graphical causal model. We formulate causal structure updating as part of the RL interaction process, using active interventions to learn the environment.
arXiv Detail & Related papers (2024-02-07T14:09:34Z)
- Targeted Reduction of Causal Models [55.11778726095353]
Causal Representation Learning offers a promising avenue to uncover interpretable causal patterns in simulations.
We introduce Targeted Causal Reduction (TCR), a method for condensing complex intervenable models into a concise set of causal factors.
Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems.
arXiv Detail & Related papers (2023-11-30T15:46:22Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Explainable Reinforcement Learning via a Causal World Model [5.4934134592053185]
We learn a causal world model without prior knowledge of the causal structure of the environment.
The model captures the influence of actions, allowing us to interpret the long-term effects of actions through causal chains.
Our model remains accurate while improving explainability, making it applicable in model-based learning.
arXiv Detail & Related papers (2023-05-04T11:38:25Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)