Near-Optimal Multi-Perturbation Experimental Design for Causal Structure
Learning
- URL: http://arxiv.org/abs/2105.14024v1
- Date: Fri, 28 May 2021 18:00:00 GMT
- Title: Near-Optimal Multi-Perturbation Experimental Design for Causal Structure
Learning
- Authors: Scott Sussex (1), Andreas Krause (1), Caroline Uhler (2) ((1)
Department of Computer Science, ETH Zürich, (2) Laboratory for Information
& Decision Systems, Massachusetts Institute of Technology)
- Abstract summary: Causal structures can be learnt by performing experiments on the system of interest.
We address the largely unexplored problem of designing experiments that simultaneously intervene on multiple variables.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal structure learning is a key problem in many domains. Causal structures
can be learnt by performing experiments on the system of interest. We address
the largely unexplored problem of designing experiments that simultaneously
intervene on multiple variables. While potentially more informative than the
commonly considered single-variable interventions, selecting such interventions
is algorithmically much more challenging, due to the doubly-exponential
combinatorial search space over sets of composite interventions. In this paper,
we develop efficient algorithms for optimizing different objective functions
quantifying the informativeness of experiments. By establishing novel
submodularity properties of these objectives, we provide approximation
guarantees for our algorithms. Empirically, our algorithms outperform both
random interventions and algorithms that only select single-variable
interventions.
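For intuition, here is a minimal sketch of the greedy selection rule that such submodularity properties typically justify. This is not the paper's implementation: the candidate pool, the function names, and the coverage-style score below are illustrative placeholders standing in for an informativeness objective over sets of multi-variable interventions.

```python
from itertools import combinations

def greedy_experiment_design(candidates, budget, score):
    """Greedily select `budget` multi-variable interventions from `candidates`.

    Assumes `score` is a monotone submodular set function measuring the
    informativeness of a set of interventions; under that assumption the
    greedy rule enjoys the classic (1 - 1/e) approximation guarantee for
    cardinality-constrained maximization.
    """
    selected, remaining = [], list(candidates)
    for _ in range(min(budget, len(remaining))):
        # Pick the candidate intervention with the largest marginal gain.
        best = max(remaining, key=lambda c: score(selected + [c]) - score(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: candidate interventions are all subsets of at most 2 variables;
# the placeholder score simply counts how many distinct variables are covered.
variables = ["x1", "x2", "x3", "x4"]
candidates = [frozenset(c) for k in (1, 2) for c in combinations(variables, k)]
coverage = lambda sel: len(set().union(*sel)) if sel else 0
print(greedy_experiment_design(candidates, budget=2, score=coverage))
# -> e.g. [frozenset({'x1', 'x2'}), frozenset({'x3', 'x4'})]
```

The paper's actual objectives quantify how informative a set of interventions is for recovering the causal structure; the coverage count above is only a stand-in with the same set-function interface.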
Related papers
- Query-Efficient Correlation Clustering with Noisy Oracle [17.11782578276788]
We introduce two novel formulations of online learning problems rooted in the paradigm of Pure Exploration in Combinatorial Multi-Armed Bandits (PE-CMAB)
We design algorithms that combine a sampling strategy with a classic approximation algorithm for correlation clustering and study their theoretical guarantees.
Our results are the first examples of polynomial-time algorithms that work for the case of PE-CMAB in which the underlying offline optimization problem is NP-hard.
arXiv Detail & Related papers (2024-02-02T13:31:24Z) - Batch Bayesian Optimization for Replicable Experimental Design [56.64902148159355]
Many real-world design problems evaluate multiple experimental conditions in parallel and replicate each condition multiple times due to large and heteroscedastic observation noise.
We propose the Batch Thompson Sampling for Replicable Experimental Design framework, which encompasses three algorithms.
We show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
arXiv Detail & Related papers (2023-11-02T12:46:03Z) - Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that keeps the domain-specific solutions close to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z) - Active Learning for Optimal Intervention Design in Causal Models [11.294389953686945]
We develop a causal active learning strategy to identify interventions that are optimal, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean.
We apply our approach to both synthetic data and single-cell transcriptomic data from Perturb-CITE-seq experiments to identify optimal perturbations that induce a specific cell state transition.
arXiv Detail & Related papers (2022-09-10T20:40:30Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Algorithms for Solving Nonlinear Binary Optimization Problems in Robust
Causal Inference [2.169755083801688]
We propose greedy algorithms to solve the robust causal inference test instances from observational data with continuous outcomes.
By leveraging the structure of the feasibility formulation, we develop greedy schemes that are efficient in solving robust test problems.
arXiv Detail & Related papers (2020-12-22T16:12:11Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains helps improve the learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Active Learning of Causal Structures with Deep Reinforcement Learning [13.202747831999414]
We study the problem of experiment design to learn causal structures from interventional data.
We present the first deep reinforcement learning based solution for the problem of experiment design.
arXiv Detail & Related papers (2020-09-07T10:49:06Z) - A Constraint-Based Algorithm for the Structural Learning of
Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z) - Learning High Order Feature Interactions with Fine Control Kernels [12.5433010409486]
We provide a methodology for learning sparse statistical models that use as features all possible multiplicative interactions.
We also introduce an algorithmic paradigm, the Fine Control Kernel framework, so named because it is based on Fenchel Duality.
arXiv Detail & Related papers (2020-02-09T06:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.