Constrained Causal Bayesian Optimization
- URL: http://arxiv.org/abs/2305.20011v1
- Date: Wed, 31 May 2023 16:34:58 GMT
- Title: Constrained Causal Bayesian Optimization
- Authors: Virginia Aglietti, Alan Malek, Ira Ktena, Silvia Chiappa
- Abstract summary: cCBO first reduces the search space by exploiting the graph structure and, if available, an observational dataset.
We evaluate cCBO on artificial and real-world causal graphs, showing a successful trade-off between fast convergence and the percentage of feasible interventions.
- Score: 9.409281517596396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose constrained causal Bayesian optimization (cCBO), an approach for
finding interventions in a known causal graph that optimize a target variable
under some constraints. cCBO first reduces the search space by exploiting the
graph structure and, if available, an observational dataset; and then solves
the restricted optimization problem by modelling target and constraint
quantities using Gaussian processes and by sequentially selecting interventions
via a constrained expected improvement acquisition function. We propose
different surrogate models that enable the integration of observational and
interventional data while capturing correlation among effects with increasing
levels of sophistication. We evaluate cCBO on artificial and real-world causal
graphs, showing a successful trade-off between fast convergence and the
percentage of feasible interventions.
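A minimal sketch of the constrained expected improvement step described above, assuming Gaussian-process posteriors for the target and a single upper-bounded constraint (function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_y, sigma_y, mu_c, sigma_c, y_best, c_max):
    """Constrained expected improvement for minimizing a target y.

    mu_y, sigma_y : GP posterior mean/std of the target at candidate interventions
    mu_c, sigma_c : GP posterior mean/std of the constraint quantity
    y_best        : best feasible target value observed so far
    c_max         : the constraint is satisfied when the quantity is <= c_max
    """
    sigma_y = np.maximum(sigma_y, 1e-12)
    z = (y_best - mu_y) / sigma_y
    ei = (y_best - mu_y) * norm.cdf(z) + sigma_y * norm.pdf(z)      # standard EI
    p_feas = norm.cdf((c_max - mu_c) / np.maximum(sigma_c, 1e-12))  # P(constraint satisfied)
    return ei * p_feas                                              # weight EI by feasibility

# The next intervention is the candidate maximizing this acquisition:
# next_idx = np.argmax(constrained_ei(mu_y, sigma_y, mu_c, sigma_c, y_best, c_max))
```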
Related papers
- Graph Agnostic Causal Bayesian Optimisation [2.624902795082451]
We study the problem of globally optimising a target variable of an unknown causal graph on which a sequence of soft or hard interventions can be performed.
We propose Graph Agnostic Causal Bayesian optimisation (GACBO), an algorithm that actively discovers the causal structure that contributes to achieving optimal rewards.
We show our proposed algorithm outperforms baselines in simulated experiments and real-world applications.
arXiv Detail & Related papers (2024-11-05T11:49:33Z) - Efficient Differentiable Discovery of Causal Order [14.980926991441342]
Intersort is a score-based method to discover causal order of variables.
We reformulate Intersort using differentiable sorting and ranking techniques.
Our work opens the door to efficiently incorporating regularization for causal order into the training of differentiable models.
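As a rough, generic illustration of the differentiable sorting mentioned above (a SoftSort-style relaxation; the temperature parameter and descending order are assumptions, and this is not the Intersort formulation):

```python
import numpy as np

def soft_sort_matrix(scores, tau=0.1):
    """Differentiable relaxation of argsort: a row-stochastic matrix that
    approaches the permutation sorting `scores` in descending order as tau -> 0.
    In practice this would be written in an autodiff framework so gradients
    flow back into `scores`."""
    s = np.asarray(scores, dtype=float)
    s_sorted = np.sort(s)[::-1]                              # target descending order
    logits = -np.abs(s_sorted[:, None] - s[None, :]) / tau   # closeness to each sorted slot
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)                  # row-wise softmax
```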
arXiv Detail & Related papers (2024-10-11T13:11:55Z) - Adversarial Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z) - Functional Causal Bayesian Optimization [21.67333624383642]
fCBO is a method for finding interventions that optimize a target variable in a known causal graph.
We introduce graphical criteria that establish when considering functional interventions is advantageous, and conditions under which the selected interventions are also optimal for conditional target effects.
arXiv Detail & Related papers (2023-06-10T11:02:53Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
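Acquisition functions without a closed form are commonly estimated by Monte Carlo over samples from the surrogate model; the sketch below shows this generic idea (the sampler callable is an assumption, and this is not MCBO's estimator):

```python
import numpy as np

def mc_expected_improvement(sample_target, y_best, n_samples=256, seed=0):
    """Monte Carlo estimate of expected improvement for one candidate intervention,
    when the acquisition has no closed form. `sample_target(rng)` is an assumed
    callable drawing a target value from the surrogate posterior."""
    rng = np.random.default_rng(seed)
    draws = np.array([sample_target(rng) for _ in range(n_samples)])
    return np.maximum(y_best - draws, 0.0).mean()  # average improvement (minimization)
```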
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Active Learning for Optimal Intervention Design in Causal Models [11.294389953686945]
We develop a causal active learning strategy to identify interventions that are optimal, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean.
We apply our approach to both synthetic data and single-cell transcriptomic data from Perturb-CITE-seq experiments to identify optimal perturbations that induce a specific cell state transition.
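A minimal sketch of the selection criterion described above, assuming the post-interventional means have already been estimated (the Euclidean distance and variable names are illustrative choices, not the paper's estimator):

```python
import numpy as np

def intervention_discrepancy(post_means, target_mean):
    """Distance between each intervention's estimated post-interventional mean
    and the desired target mean; smaller is better."""
    return np.linalg.norm(post_means - target_mean[None, :], axis=1)

# best_intervention = np.argmin(intervention_discrepancy(estimated_means, desired_mean))
```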
arXiv Detail & Related papers (2022-09-10T20:40:30Z) - Optimization-Induced Graph Implicit Nonlinear Diffusion [64.39772634635273]
We propose a new graph convolution variant, called Graph Implicit Nonlinear Diffusion (GIND).
GIND implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing.
We show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective.
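The "infinite hops of neighbors" behaviour can be pictured with a toy implicit layer that iterates a nonlinear diffusion to a fixed point; the update rule below is a hypothetical illustration, not the GIND equations:

```python
import numpy as np

def implicit_diffusion(adj_norm, x, weight, alpha=0.5, n_iters=100, tol=1e-6):
    """Iterate z = (1 - alpha) * x + alpha * tanh(adj_norm @ z @ weight) to a fixed
    point, so the equilibrium embedding aggregates arbitrarily distant neighbors."""
    z = x.copy()
    for _ in range(n_iters):
        z_next = (1 - alpha) * x + alpha * np.tanh(adj_norm @ z @ weight)
        if np.linalg.norm(z_next - z) < tol:  # stop once the fixed point is reached
            break
        z = z_next
    return z
```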
arXiv Detail & Related papers (2022-06-29T06:26:42Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Causal Bayesian Optimization [8.958125394444679]
We study the problem of globally optimizing a variable of interest that is part of a causal model in which a sequence of interventions can be performed.
Our approach combines ideas from causal inference, uncertainty quantification and sequential decision making.
We show how knowing the causal graph significantly improves the ability to reason about optimal decision making strategies.
arXiv Detail & Related papers (2020-05-24T13:20:50Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
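To make "Gaussian processes trained on few data points" concrete, a minimal exact GP posterior with an RBF kernel (unit prior variance and fixed hyperparameters are assumptions; this is unrelated to the paper's reduced-space solver):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, lengthscale=1.0, noise=1e-6):
    """Exact GP posterior mean and variance with an RBF kernel,
    trained on a handful of points."""
    def rbf(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    k_xx = rbf(x_train, x_train) + noise * np.eye(len(x_train))  # train covariance
    k_sx = rbf(x_test, x_train)                                  # test-train covariance
    mean = k_sx @ np.linalg.solve(k_xx, y_train)
    var = 1.0 - np.sum(k_sx * np.linalg.solve(k_xx, k_sx.T).T, axis=1)
    return mean, var
```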
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.