Active learning of causal probability trees
- URL: http://arxiv.org/abs/2205.08178v1
- Date: Tue, 17 May 2022 08:56:34 GMT
- Title: Active learning of causal probability trees
- Authors: Tue Herlau
- Abstract summary: We present a method for learning probability trees from a combination of interventional and observational data.
The method quantifies the expected information gain from an intervention, and selects the interventions with the largest gain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The past two decades have seen a growing interest in combining causal
information, commonly represented using causal graphs, with machine learning
models. Probability trees provide a simple yet powerful alternative
representation of causal information. They enable the computation of both
interventions and counterfactuals, and they are strictly more general, since
they allow context-dependent causal dependencies. Here we present a Bayesian
method
for learning probability trees from a combination of interventional and
observational data. The method quantifies the expected information gain from an
intervention, and selects the interventions with the largest gain. We
demonstrate the efficiency of the method on simulated and real data. An
effective method for learning probability trees on a limited interventional
budget will greatly expand their applicability.
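The exact information-gain estimator is not given in this summary. The sketch below illustrates the selection rule on a toy probability tree whose branch parameters carry Beta posteriors, so the expected information gain of one more interventional outcome has a closed form (the Beta-Bernoulli BALD identity); all names and counts are illustrative.

```python
import numpy as np
from scipy.special import digamma

def entropy_bernoulli(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_information_gain(a, b):
    """Expected KL between the updated and current Beta(a, b) posterior after
    observing one more Bernoulli outcome: EIG = H(E[theta]) - E[H(Bern(theta))]."""
    mean = a / (a + b)
    expected_entropy = (-(a / (a + b)) * (digamma(a + 1) - digamma(a + b + 1))
                        - (b / (a + b)) * (digamma(b + 1) - digamma(a + b + 1)))
    return entropy_bernoulli(mean) - expected_entropy

# Beta pseudo-counts for the outcome under each candidate intervention,
# e.g. one branch of a toy probability tree per intervention (made-up numbers):
counts = {"do(X=0)": (9.0, 3.0),   # well-explored branch, low expected gain
          "do(X=1)": (1.0, 1.0)}   # unexplored branch, high expected gain

best = max(counts, key=lambda iv: expected_information_gain(*counts[iv]))
print(best)  # selects do(X=1), the intervention with the largest expected gain
```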
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this model completion learning approach can be more effective than estimand approaches.
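As a toy illustration (not the paper's algorithm; all probabilities are made up): once a causal Bayesian network with a latent confounder U affecting both X and Y has been learned, an interventional query is answered by marginalizing the latent, P(y | do(x)) = sum_u P(u) P(y | x, u).

```python
# Learned latent prior and CPT (illustrative numbers only).
p_u = {0: 0.7, 1: 0.3}                      # P(U = u)
p_y1_given_xu = {(0, 0): 0.2, (0, 1): 0.6,  # P(Y = 1 | X = x, U = u)
                 (1, 0): 0.5, (1, 1): 0.9}

def p_y1_do_x(x):
    # Intervening on X cuts the U -> X edge, so U keeps its prior.
    return sum(p_u[u] * p_y1_given_xu[(x, u)] for u in p_u)

print(p_y1_do_x(1) - p_y1_do_x(0))  # average causal effect of X on Y: 0.3
```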
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Distilling interpretable causal trees from causal forests [0.0]
A high-dimensional distribution of conditional average treatment effects may give accurate, individual-level estimates.
This paper proposes the Distilled Causal Tree, a method for distilling a single, interpretable causal tree from a causal forest.
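A minimal sketch of the distillation idea (hypothetical data; the predictions of any causal forest could stand in for `cate_hat`):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # covariates
cate_hat = np.where(X[:, 0] > 0, 2.0, 0.5)  # stand-in for forest CATE estimates

# Fit one shallow regression tree to the forest's effect estimates, yielding
# a single interpretable surrogate for the high-dimensional CATE surface.
distilled = DecisionTreeRegressor(max_depth=3).fit(X, cate_hat)
```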
arXiv Detail & Related papers (2024-08-02T05:48:15Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- A Meta-Reinforcement Learning Algorithm for Causal Discovery [3.4806267677524896]
Causal structures can enable models to go beyond pure correlation-based inference.
Finding causal structures from data poses a significant challenge both in computational effort and accuracy.
We develop a meta-reinforcement learning algorithm that performs causal discovery by learning to perform interventions.
arXiv Detail & Related papers (2022-07-18T09:26:07Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Probability trees and the value of a single intervention [0.0]
We quantify the information gain from a single intervention and show that both the anticipated information gain, prior to making an intervention, and the expected gain from an intervention have simple expressions.
This results in an active-learning method that simply selects the intervention with the highest anticipated gain.
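In Bayesian experimental design, one standard form of this quantity (a plausible reading of the summary, not necessarily the paper's exact expression) is the expected posterior-to-prior KL divergence over the tree parameters:

```latex
\mathrm{EIG}\big(\operatorname{do}(X{=}x)\big)
  = \mathbb{E}_{y \sim p(y \mid \operatorname{do}(X{=}x), \mathcal{D})}
    \Big[\, \mathrm{KL}\big(\, p(\theta \mid \mathcal{D}, \operatorname{do}(X{=}x), y)
      \;\big\|\; p(\theta \mid \mathcal{D}) \,\big) \Big]
```

where \mathcal{D} is the data observed so far and \theta parameterizes the probability tree.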
arXiv Detail & Related papers (2022-05-18T08:01:33Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- To do or not to do: finding causal relations in smart homes [2.064612766965483]
This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data.
The core of our method is the use of selected interventions; in particular, the learning takes into account variables on which it is impossible to intervene.
We apply our method to a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems.
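A minimal sketch of that constraint (variable names invented): candidate interventions are drawn only from the actionable variables, while the rest stay observational.

```python
# True marks variables the system can actually set; False marks variables
# (e.g. outdoor temperature) that can only be observed.
intervenable = {"light_switch": True, "heater": True,
                "outdoor_temp": False, "occupancy": False}

candidates = [v for v, ok in intervenable.items() if ok]
print(candidates)  # ['light_switch', 'heater']
```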
arXiv Detail & Related papers (2021-05-20T22:36:04Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
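The representation learner itself is not described in this summary; below is a minimal sketch of the standard doubly robust (AIPW) average-treatment-effect estimate that such methods build on, which stays consistent if either the outcome models or the propensity model is correct.

```python
import numpy as np

def aipw_ate(y, t, m1, m0, e):
    """y: outcomes, t: binary treatment indicator, m1/m0: outcome-model
    predictions under treatment/control, e: estimated propensity scores."""
    return np.mean(m1 - m0
                   + t * (y - m1) / e
                   - (1 - t) * (y - m0) / (1 - e))
```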
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Handling Missing Data in Decision Trees: A Probabilistic Approach [41.259097100704324]
We tackle the problem of handling missing data in decision trees by taking a probabilistic approach.
We use tractable density estimators to compute the "expected prediction" of our models.
At learning time, we fine-tune the parameters of already-learned trees by minimizing their "expected prediction loss".
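A toy illustration of the "expected prediction" idea (a hypothetical depth-1 tree; a real implementation would use a tractable density estimator over all features):

```python
def tree_predict(x0):
    """A depth-1 decision tree that splits on a single feature."""
    return 1.0 if x0 > 0.5 else 0.0

def expected_prediction(p_right):
    """When x0 is missing, weight each branch's prediction by the probability
    mass the density estimator assigns to it (p_right = P(x0 > 0.5))."""
    return p_right * tree_predict(1.0) + (1 - p_right) * tree_predict(0.0)

print(expected_prediction(0.3))  # 0.3
```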
arXiv Detail & Related papers (2020-06-29T19:54:54Z)