DODO: Causal Structure Learning with Budgeted Interventions
- URL: http://arxiv.org/abs/2510.08207v1
- Date: Thu, 09 Oct 2025 13:32:33 GMT
- Title: DODO: Causal Structure Learning with Budgeted Interventions
- Authors: Matteo Gregorini, Chiara Boldrini, Lorenzo Valerio
- Abstract summary: We introduce DODO, an algorithm defining how an Agent can autonomously learn the causal structure of its environment. Results show better performance for DODO, compared to observational approaches, in all but the most limited resource conditions.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence has achieved remarkable advancements in recent years, yet much of its progress relies on identifying increasingly complex correlations. Enabling causality awareness in AI has the potential to enhance its performance by enabling a deeper understanding of the underlying mechanisms of the environment. In this paper, we introduce DODO, an algorithm defining how an Agent can autonomously learn the causal structure of its environment through repeated interventions. We assume a scenario where an Agent interacts with a world governed by a causal Directed Acyclic Graph (DAG), which dictates the system's dynamics but remains hidden from the Agent. The Agent's task is to accurately infer the causal DAG, even in the presence of noise. To achieve this, the Agent performs interventions, leveraging causal inference techniques to analyze the statistical significance of observed changes. Results show better performance for DODO, compared to observational approaches, in all but the most limited resource conditions. DODO is often able to reconstruct the structure of the causal graph with zero errors. In the most challenging configuration, DODO outperforms the best baseline by +0.25 F1 points.
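The intervention-and-test loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the toy SCM, variable names, and the Welch t-statistic threshold are all assumptions chosen for illustration. The agent clamps one variable at a time (a do-intervention) and tests whether each other variable's distribution shifts significantly relative to observational samples.

```python
# Illustrative sketch of intervention-based causal structure learning in the
# spirit of DODO (hypothetical code, not the paper's algorithm): intervene on
# each variable in turn and flag variables whose distribution shifts.
import numpy as np

rng = np.random.default_rng(0)
VARS = ("X", "Y", "Z")

def sample_world(n, do=None):
    """Hidden ground-truth SCM X -> Y -> Z; `do` clamps one variable to 3.0."""
    x = np.full(n, 3.0) if do == "X" else rng.normal(size=n)
    y = np.full(n, 3.0) if do == "Y" else 2.0 * x + rng.normal(size=n)
    z = np.full(n, 3.0) if do == "Z" else -1.5 * y + rng.normal(size=n)
    return {"X": x, "Y": y, "Z": z}

def welch_t(a, b):
    """Welch's t statistic for a mean shift between two samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def infer_ancestors(n=500, t_thresh=4.0):
    """Return pairs (cause, effect) where do(cause) shifts effect's mean.

    Note: a plain mean-shift test flags every descendant of the intervened
    variable, so this recovers the transitive closure of the DAG; extracting
    only direct edges would require an extra transitive-reduction step.
    """
    baseline = sample_world(n)
    edges = set()
    for cause in VARS:
        post = sample_world(n, do=cause)
        for effect in VARS:
            if effect == cause:
                continue
            if abs(welch_t(baseline[effect], post[effect])) > t_thresh:
                edges.add((cause, effect))
    return edges
```

Under a fixed intervention budget, one design question this sketch surfaces is how to split samples between the observational baseline and each intervention, since both sides of the test contribute to its power.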
Related papers
- How AI Agents Follow the Herd of AI? Network Effects, History, and Machine Optimism [7.1683021355290295]
This study investigates how AI agents navigate network-effect games, where individual payoffs depend on peer participation, a context underexplored in multi-agent systems. We introduce a novel workflow design using large language model (LLM)-based agents in repeated decision-making scenarios.
arXiv Detail & Related papers (2025-12-12T12:14:48Z) - ARCADIA: Scalable Causal Discovery for Corporate Bankruptcy Analysis Using Agentic AI [0.0]
ARCADIA integrates large-language-model reasoning with statistical diagnostics to construct valid causal structures. Unlike traditional algorithms, ARCADIA iteratively refines candidate DAGs through constraint-guided prompting and causal-validity feedback.
arXiv Detail & Related papers (2025-11-30T11:21:29Z) - Mutual Information Tracks Policy Coherence in Reinforcement Learning [0.0]
Reinforcement Learning (RL) agents face degradation from sensor faults, actuator wear, and environmental shifts. We present an information-theoretic framework that reveals both the fundamental dynamics of RL and provides practical methods for diagnosing deployment-time anomalies.
arXiv Detail & Related papers (2025-09-12T17:24:20Z) - Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities [117.49715661395294]
Data structurization can play a promising role by transforming intricate and disorganized data into well-structured forms. This survey presents a first systematic review of how graphs can empower AI agents.
arXiv Detail & Related papers (2025-06-22T12:59:12Z) - Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains [50.66049136093248]
We develop a time-aware structural causal model (SCM) that incorporates dynamic causal factors and causal mechanism drifts. We show that our method can yield the optimal causal predictor for each time domain. Results on both synthetic and real-world datasets show that SYNC achieves superior temporal generalization performance.
arXiv Detail & Related papers (2025-06-21T14:05:37Z) - Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL)
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
arXiv Detail & Related papers (2024-07-17T09:45:27Z) - Discovering and Reasoning of Causality in the Hidden World with Large Language Models [109.62442253177376]
We develop a new framework termed Causal representatiOn AssistanT (COAT) to propose useful measured variables for causal discovery. Instead of directly inferring causality with Large language models (LLMs), COAT constructs feedback from intermediate causal discovery results to LLMs to refine the proposed variables.
arXiv Detail & Related papers (2024-02-06T12:18:54Z) - Realization of Causal Representation Learning to Adjust Confounding Bias in Latent Space [28.133104562449212]
Causal DAGs (Directed Acyclic Graphs) are usually considered in a 2D plane.
In this paper, we redefine the causal DAG as a do-DAG, in which variables' values are no longer time-stamp-dependent, and timelines can be seen as axes.
arXiv Detail & Related papers (2022-11-15T23:35:15Z) - A Meta-Reinforcement Learning Algorithm for Causal Discovery [3.4806267677524896]
Causal structures can enable models to go beyond pure correlation-based inference.
Finding causal structures from data poses a significant challenge both in computational effort and accuracy.
We develop a meta-reinforcement learning algorithm that performs causal discovery by learning to perform interventions.
arXiv Detail & Related papers (2022-07-18T09:26:07Z) - Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents that incorporate an established model of human irrationality, the Rational Inattention (RI) model.
RIRL models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
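The mutual-information cost mentioned above can be made concrete with a small self-contained computation. This is a hypothetical sketch, not code from the RIRL paper: it measures I(S; A), the mutual information between states and actions, which is zero for a policy that ignores the state and grows as the policy conditions more strongly on it.

```python
# Hypothetical sketch: mutual information I(S; A) between states and actions
# as an information-processing cost, in the spirit of rational inattention.
import math

def mutual_information(joint):
    """I(S; A) in bits, from a joint distribution p(s, a) as a nested dict."""
    p_s = {s: sum(pa.values()) for s, pa in joint.items()}   # marginal p(s)
    p_a = {}                                                  # marginal p(a)
    for pa in joint.values():
        for a, p in pa.items():
            p_a[a] = p_a.get(a, 0.0) + p
    mi = 0.0
    for s, pa in joint.items():
        for a, p in pa.items():
            if p > 0:
                mi += p * math.log2(p / (p_s[s] * p_a[a]))
    return mi

# A policy that ignores the state pays zero information cost...
uniform = {"s0": {"a0": 0.25, "a1": 0.25}, "s1": {"a0": 0.25, "a1": 0.25}}
# ...while a fully state-determined policy pays 1 bit in this 2x2 example.
determ = {"s0": {"a0": 0.5, "a1": 0.0}, "s1": {"a0": 0.0, "a1": 0.5}}
```

An inattentive agent then trades expected reward against this cost, paying fewer bits by acting less precisely on the state.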
arXiv Detail & Related papers (2022-01-18T20:54:00Z) - Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.