Active Learning of Continuous-time Bayesian Networks through
Interventions
- URL: http://arxiv.org/abs/2105.14742v1
- Date: Mon, 31 May 2021 07:13:50 GMT
- Title: Active Learning of Continuous-time Bayesian Networks through
Interventions
- Authors: Dominik Linzner and Heinz Koeppl
- Abstract summary: We consider the problem of learning structures and parameters of Continuous-time Bayesian Networks (CTBNs) from time-course data under minimal experimental resources.
We propose a novel criterion for experimental design based on a variational approximation of the expected information gain.
- Score: 38.728001040001615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning structures and parameters of
Continuous-time Bayesian Networks (CTBNs) from time-course data under minimal
experimental resources. In practice, the cost of generating experimental data
poses a bottleneck, especially in the natural and social sciences. A popular
approach to overcome this is Bayesian optimal experimental design (BOED).
However, BOED becomes infeasible in high-dimensional settings, as it involves
integration over all possible experimental outcomes. We propose a novel
criterion for experimental design based on a variational approximation of the
expected information gain. We show that for CTBNs, a semi-analytical expression
for this criterion can be calculated for structure and parameter learning. By
doing so, we can replace sampling over experimental outcomes by solving the
CTBN's master equation, for which scalable approximations exist. This alleviates
the computational burden of sampling possible experimental outcomes in
high dimensions. We employ this framework to recommend interventional
sequences. In this context, we extend the CTBN model to conditional CTBNs in
order to incorporate interventions. We demonstrate the performance of our
criterion on synthetic and real-world data.
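For context, the expected information gain (EIG) that BOED maximises, and the kind of variational lower bound the abstract alludes to, can be written in the standard Barber-Agakov form; this is a generic sketch of the criterion, not necessarily the exact expression derived in the paper:

  EIG(d) = E_{p(y \mid d)} [ D_{KL}( p(\theta \mid y, d) \,\|\, p(\theta) ) ]
         \ge E_{p(\theta)\, p(y \mid \theta, d)} [ \log q(\theta \mid y, d) ] + H[ p(\theta) ]

where d is the experimental design, y the (as yet unobserved) outcome, \theta the structure and parameters, and q a variational posterior; the bound is tight when q matches the true posterior. Replacing sampled outcomes with a master-equation solve can be pictured with a toy continuous-time Markov chain. The following is a minimal sketch under assumed toy values (not the authors' code), with a hypothetical 3-state rate matrix:

# Minimal sketch (assumed toy example): propagate the marginal state
# distribution of a 3-state continuous-time Markov chain by solving its
# master equation dp/dt = p Q with a matrix exponential.
import numpy as np
from scipy.linalg import expm

# Toy rate matrix Q (rows sum to zero); a CTBN factorises such a matrix over nodes.
Q = np.array([[-0.7,  0.5,  0.2],
              [ 0.3, -0.4,  0.1],
              [ 0.1,  0.6, -0.7]])

p0 = np.array([1.0, 0.0, 0.0])   # initial distribution over the 3 states
t = 2.5                          # time horizon of the (hypothetical) experiment

pt = p0 @ expm(Q * t)            # p(t) = p0 exp(Q t), the master-equation solution
print(pt, pt.sum())              # a probability vector; sums to 1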
Related papers
- Learning Coupled Subspaces for Multi-Condition Spike Data [8.114880112033644]
In neuroscience, researchers typically conduct experiments under multiple conditions to acquire neural responses in the form of high-dimensional spike train datasets.
We propose a non-parametric Bayesian approach to learn a smooth tuning function over the experiment condition space.
arXiv Detail & Related papers (2024-10-24T20:44:28Z)
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z)
- Intervention Generalization: A View from Factor Graph Models [7.117681268784223]
We take a close look at how to warrant a leap from past experiments to novel conditions based on minimal assumptions about the factorization of the distribution of the manipulated system.
A postulated interventional factor model (IFM) may not always be informative, but it conveniently abstracts away a need for explicitly modeling unmeasured confounding and feedback mechanisms.
arXiv Detail & Related papers (2023-06-06T21:44:23Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy-to-interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on normalizing flows (NF).
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better-specified models or drive the development of novel SBI algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z)
- New Paradigms for Exploiting Parallel Experiments in Bayesian Optimization [0.0]
We present new parallel BO paradigms that exploit the structure of the system to partition the design space.
Specifically, we propose an approach that partitions the design space by following the level sets of the performance function.
Our results show that our approaches significantly reduce the required search time and increase the probability of finding a global (rather than local) solution.
arXiv Detail & Related papers (2022-10-03T16:45:23Z)
- Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for developing a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z)
- Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z)
- Bayesian Experimental Design for Implicit Models by Mutual Information Neural Estimation [16.844481439960663]
Implicit models, where the data-generation distribution is intractable but sampling is possible, are ubiquitous in the natural sciences.
A fundamental question is how to design experiments so that the collected data are most useful.
A common criterion is the mutual information (MI) between the data and the quantities of interest; for implicit models, however, this approach is severely hampered by the high computational cost of computing posteriors.
We show that training a neural network to maximise a lower bound on MI allows us to jointly determine the optimal design and the posterior.
arXiv Detail & Related papers (2020-02-19T12:09:42Z)
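To make the last entry above more concrete, here is a minimal, hypothetical sketch of jointly optimising a scalar design and a MINE-style neural lower bound on the mutual information between parameters and simulated outcomes. The simulator, critic architecture, and training loop are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: jointly optimise a design and a Donsker-Varadhan (MINE-style)
# lower bound on I(theta; y | design) for a toy implicit simulator.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(design, n):
    # Assumed toy simulator: y = design * theta + noise, theta ~ N(0, 1).
    theta = torch.randn(n, 1)
    y = design * theta + 0.1 * torch.randn(n, 1)
    return theta, y

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
design = torch.tensor(0.5, requires_grad=True)                 # scalar design to optimise
opt = torch.optim.Adam(list(critic.parameters()) + [design], lr=1e-2)

for step in range(500):
    theta, y = simulate(design, 256)
    joint = critic(torch.cat([theta, y], dim=1))               # pairs from p(theta, y | d)
    shuffled = theta[torch.randperm(theta.shape[0])]
    marginal = critic(torch.cat([shuffled, y], dim=1))         # broken pairs: p(theta) p(y | d)
    mi_lb = joint.mean() - torch.log(torch.exp(marginal).mean())  # MI lower bound
    loss = -mi_lb                          # ascend the bound in both critic and design
    opt.zero_grad()
    loss.backward()
    opt.step()

print("design:", float(design), "MI lower bound:", float(mi_lb))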