Automated Discovery of Functional Actual Causes in Complex Environments
- URL: http://arxiv.org/abs/2404.10883v1
- Date: Tue, 16 Apr 2024 20:04:29 GMT
- Title: Automated Discovery of Functional Actual Causes in Complex Environments
- Authors: Caleb Chuck, Sankaran Vaidyanathan, Stephen Giguere, Amy Zhang, David Jensen, Scott Niekum
- Abstract summary: Reinforcement learning algorithms often struggle to learn policies that generalize to novel situations.
This paper introduces functional actual cause (FAC), a framework that uses context-specific independencies in the environment to restrict the set of actual causes.
We additionally introduce Joint Optimization for Actual Cause Inference (JACI), an algorithm that learns from observational data to infer functional actual causes.
- Score: 18.58053096534201
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) algorithms often struggle to learn policies that generalize to novel situations due to issues such as causal confusion, overfitting to irrelevant factors, and failure to isolate control of state factors. These issues stem from a common source: a failure to accurately identify and exploit state-specific causal relationships in the environment. While some prior works in RL aim to identify these relationships explicitly, they rely on informal domain-specific heuristics such as spatial and temporal proximity. Actual causality offers a principled and general framework for determining the causes of particular events. However, existing definitions of actual cause often attribute causality to a large number of events, even if many of them rarely influence the outcome. Prior work on actual causality proposes normality as a solution to this problem, but its existing implementations are challenging to scale to complex and continuous-valued RL environments. This paper introduces functional actual cause (FAC), a framework that uses context-specific independencies in the environment to restrict the set of actual causes. We additionally introduce Joint Optimization for Actual Cause Inference (JACI), an algorithm that learns from observational data to infer functional actual causes. We demonstrate empirically that FAC agrees with known results on a suite of examples from the actual causality literature, and JACI identifies actual causes with significantly higher accuracy than existing heuristic methods in a set of complex, continuous-valued environments.
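The abstract's central idea, context-specific independence, can be illustrated with a hypothetical toy structural model (not taken from the paper): an outcome that functionally ignores one of its parents depending on a context variable, so that parent cannot be an actual cause of the outcome in that context. The model and variable names below are illustrative assumptions.

```python
import random

# Toy structural model: the outcome reads x1 when c == 0 and x2 when c == 1.
# In the context c == 0, y is functionally independent of x2, so only x1
# (and c) can be actual causes of y; in the context c == 1 the roles reverse.
def outcome(c, x1, x2):
    return x1 if c == 0 else x2

random.seed(0)
# In context c == 0, varying x2 never changes the outcome.
for _ in range(100):
    x1 = random.random()
    a, b = random.random(), random.random()
    assert outcome(0, x1, a) == outcome(0, x1, b)  # x2 is screened off
```

A framework that exploits such independencies can exclude x2 from the set of actual causes whenever c == 0, which is the kind of restriction the FAC framework formalizes.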
Related papers
- New Rules for Causal Identification with Background Knowledge [59.733125324672656]
We propose two novel rules for incorporating BK, which offer a new perspective to the open problem.
We show that these rules are applicable in some typical causality tasks, such as determining the set of possible causal effects with observational data.
arXiv Detail & Related papers (2024-07-21T20:21:21Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose IDOL, an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Causal Representation Learning Made Identifiable by Grouping of Observational Variables [8.157856010838382]
Causal Representation Learning aims to learn a causal model for hidden features in a data-driven manner.
Here, we show identifiability based on novel, weak constraints.
We also propose a novel self-supervised estimation framework consistent with the model.
arXiv Detail & Related papers (2023-10-24T10:38:02Z)
- A Survey on Causal Discovery: Theory and Practice [2.741266294612776]
Causal inference is designed to quantify the underlying relationships that connect a cause to its effect.
In this paper, we explore recent advancements in a unified manner, provide a consistent overview of existing algorithms, report useful tools and data, and present real-world applications.
arXiv Detail & Related papers (2023-05-17T08:18:56Z)
- Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
arXiv Detail & Related papers (2022-06-30T06:00:13Z)
- Differentiable Invariant Causal Discovery [106.87950048845308]
Learning causal structure from observational data is a fundamental challenge in machine learning.
This paper proposes Differentiable Invariant Causal Discovery (DICD) to avoid learning spurious edges and wrong causal directions.
Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods up to 36% in SHD.
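The SHD mentioned above is the Structural Hamming Distance, a standard metric counting the edge additions, deletions, and reversals needed to turn a predicted graph into the ground-truth graph. A minimal sketch over adjacency matrices follows; this is an illustrative convention (counting a reversal as one error), not the DICD implementation:

```python
def shd(pred, true):
    """Structural Hamming Distance between two directed graphs given as
    0/1 adjacency matrices. Counts missing, extra, and reversed edges,
    with a reversal counted once."""
    n = len(true)
    dist = 0
    for i in range(n):
        for j in range(i + 1, n):  # examine each unordered pair of nodes once
            p = (pred[i][j], pred[j][i])
            t = (true[i][j], true[j][i])
            if p != t:
                dist += 1  # missing, extra, or reversed edge between i and j
    return dist

# True graph 0 -> 1 -> 2; prediction reverses 0 -> 1 and adds 0 -> 2.
true = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
pred = [[0, 0, 1], [1, 0, 1], [0, 0, 0]]
print(shd(pred, true))  # 2
```

Lower SHD means the recovered structure is closer to the true causal graph, which is why discovery methods report improvements in this metric.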
arXiv Detail & Related papers (2022-05-31T09:29:07Z)
- Causal Inference Principles for Reasoning about Commonsense Causality [93.19149325083968]
Commonsense causality reasoning aims at identifying plausible causes and effects in natural language descriptions that are deemed reasonable by an average person.
Existing work usually relies on deep language models wholeheartedly, and is potentially susceptible to confounding co-occurrences.
Motivated by classical causal principles, we articulate the central question of CCR and draw parallels between human subjects in observational studies and natural languages.
We propose a novel framework, ROCK, to Reason O(A)bout Commonsense K(C)ausality, which utilizes temporal signals as incidental supervision.
arXiv Detail & Related papers (2022-01-31T06:12:39Z)
- Causal Relational Learning [29.082088734252213]
We propose a declarative language called CaRL for capturing causal background knowledge and assumptions.
CaRL provides a foundation for inferring causality and reasoning about the effect of complex interventions in relational domains.
arXiv Detail & Related papers (2020-04-07T18:33:05Z)
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.