Observational and Interventional Causal Learning for Regret-Minimizing
Control
- URL: http://arxiv.org/abs/2212.02435v1
- Date: Mon, 5 Dec 2022 17:23:59 GMT
- Title: Observational and Interventional Causal Learning for Regret-Minimizing
Control
- Authors: Christian Reiser
- Abstract summary: We explore how observational and interventional causal discovery methods can be combined.
A state-of-the-art observational causal discovery algorithm for time series, called LPCMCI, is extended to profit from causal constraints found through randomized controlled trials.
Numerical results show that, given perfect interventional constraints, the reconstructed structural causal models (SCMs) of the extended LPCMCI allow for optimal prediction of the target variable 84.6% of the time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore how observational and interventional causal discovery methods can
be combined. A state-of-the-art observational causal discovery algorithm for
time series capable of handling latent confounders and contemporaneous effects,
called LPCMCI, is extended to profit from causal constraints found through
randomized controlled trials. Numerical results show that, given perfect
interventional constraints, the reconstructed structural causal models (SCMs)
of the extended LPCMCI allow for optimal prediction of the target variable
84.6% of the time. The implementation of interventional and observational
causal discovery is modular, allowing causal constraints from other sources.
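As an illustration of what this modularity could look like, the sketch below merges edge orientations confirmed by randomized experiments into an undirected skeleton produced by an observational discovery step. All names here (`merge_constraints`, the toy skeleton) are hypothetical and not taken from the LPCMCI implementation:

```python
# Hypothetical sketch: orienting observationally-discovered adjacencies
# with interventional (RCT-derived) causal constraints.

def merge_constraints(skeleton, interventional_edges):
    """Orient undirected skeleton edges using interventional evidence.

    skeleton: set of frozensets {a, b}, undirected adjacencies found
              by observational discovery.
    interventional_edges: set of (cause, effect) pairs confirmed by
              randomized experiments.
    Returns (directed, undirected) edge sets.
    """
    directed = set()
    undirected = set(skeleton)
    for cause, effect in interventional_edges:
        pair = frozenset((cause, effect))
        # Orient an existing adjacency, or add an edge discovery missed.
        undirected.discard(pair)
        directed.add((cause, effect))
    return directed, undirected

skeleton = {frozenset(("X", "Y")), frozenset(("Y", "Z"))}
rct_edges = {("X", "Y")}
directed, undirected = merge_constraints(skeleton, rct_edges)
```

Because the constraint source is decoupled from the discovery step, the same merge could accept constraints from expert knowledge or other experiments, which is the modularity the abstract refers to.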
The second part of this thesis investigates the question of regret minimizing
control by simultaneously learning a causal model and planning actions through
the causal model. The idea is that an agent tasked with optimizing a measured
variable first learns the system's mechanics through observational causal
discovery. The agent then intervenes on the most promising variable with
randomized values, which both exploits the current model and generates new
interventional data. The agent then uses the interventional data to refine the
causal model, enabling improved actions in subsequent rounds.
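The observe-discover-intervene loop described above can be sketched as follows. This is a minimal illustration under assumed interfaces (`env_step`, `pick_most_promising`), not the thesis implementation; in the thesis, the discovery step is the extended LPCMCI and the variable choice comes from planning through the learned SCM:

```python
# Minimal sketch of the regret-minimizing agent loop: observe first,
# then intervene with randomized values to generate interventional data.
import random

def run_agent(env_step, n_rounds, variables):
    """env_step(action) -> (observation, reward); action is
    (variable, value) or None for pure observation."""
    observational, interventional = [], []
    for t in range(n_rounds):
        if t < n_rounds // 2:
            obs, _ = env_step(None)            # observe only
            observational.append(obs)
        else:
            var = pick_most_promising(variables, observational,
                                      interventional)
            value = random.uniform(-1, 1)      # randomized intervention
            obs, reward = env_step((var, value))
            interventional.append((var, value, obs, reward))
    return observational, interventional

def pick_most_promising(variables, observational, interventional):
    # Placeholder: the thesis derives this choice by planning through
    # the learned SCM; here we pick uniformly at random.
    return random.choice(variables)
```

The randomized intervention values matter: they let the agent act on its current best guess while still producing data from which causal effects remain identifiable.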
The extended LPCMCI compares favorably to the original LPCMCI algorithm. The
numerical results show that detecting and using interventional constraints
leads to reconstructed SCMs that allow for optimal prediction of the target
variable 60.9% of the time, in contrast to the baseline of 53.6%
when using the original LPCMCI algorithm. Furthermore, the induced average
regret decreases from 1.2 when using the original LPCMCI algorithm to 1.0 when
using the extended LPCMCI algorithm with interventional discovery.
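For readers unfamiliar with the metric quoted above, average regret is conventionally the mean per-round gap between the best achievable target value and the value the agent's action actually achieved. A minimal sketch, with hypothetical inputs:

```python
# Average regret: mean gap between the optimal and the achieved
# target-variable value across rounds (conventional definition;
# the thesis's exact formulation is not reproduced here).

def average_regret(optimal_values, achieved_values):
    assert len(optimal_values) == len(achieved_values)
    regrets = [opt - got
               for opt, got in zip(optimal_values, achieved_values)]
    return sum(regrets) / len(regrets)
```

Under this reading, the drop from 1.2 to 1.0 means the agent's actions moved the target variable 0.2 units closer, on average, to its optimum per round.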
Related papers
- SEDMamba: Enhancing Selective State Space Modelling with Bottleneck Mechanism and Fine-to-Coarse Temporal Fusion for Efficient Error Detection in Robot-Assisted Surgery [7.863539113283565]
We propose a novel hierarchical model named SEDMamba, which incorporates the selective state space model (SSM) into surgical error detection.
SEDMamba enhances selective SSM with bottleneck mechanism and fine-to-coarse temporal fusion (FCTF) to detect and temporally localize surgical errors in long videos.
FCTF utilizes multiple dilated 1D convolutional layers to merge temporal information across diverse scale ranges, accommodating errors of varying durations.
arXiv Detail & Related papers (2024-06-22T19:20:35Z) - Monitoring machine learning (ML)-based risk prediction algorithms in the
presence of confounding medical interventions [4.893345190925178]
Performance monitoring of machine learning (ML)-based risk prediction models in healthcare is complicated by the issue of confounding medical interventions (CMI).
A simple approach is to ignore CMI and monitor only the untreated patients, whose outcomes remain unaltered.
We show that valid inference is still possible if one monitors conditional performance and if either conditional exchangeability or time-constant selection bias hold.
arXiv Detail & Related papers (2022-11-17T18:54:34Z) - Active Learning for Optimal Intervention Design in Causal Models [11.294389953686945]
We develop a causal active learning strategy to identify interventions that are optimal, as measured by the discrepancy between the post-interventional mean of the distribution and a desired target mean.
We apply our approach to both synthetic data and single-cell transcriptomic data from Perturb-CITE-seq experiments to identify optimal perturbations that induce a specific cell state transition.
arXiv Detail & Related papers (2022-09-10T20:40:30Z) - Adaptive LASSO estimation for functional hidden dynamic geostatistical
model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (GMSOLAS) penalty function, wherein the weights are obtained by the unpenalised f-HD maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z) - MissDAG: Causal Discovery in the Presence of Missing Data with
Continuous Additive Noise Models [78.72682320019737]
We develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
MissDAG maximizes the expected likelihood of the visible part of observations under the expectation-maximization framework.
We demonstrate the flexibility of MissDAG for incorporating various causal discovery algorithms and its efficacy through extensive simulations and real data experiments.
arXiv Detail & Related papers (2022-05-27T09:59:46Z) - Interventions, Where and How? Experimental Design for Causal Models at
Scale [47.63842422086614]
Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability.
In this paper, we incorporate recent advances in Bayesian causal discovery into the Bayesian optimal experimental design framework.
We demonstrate the performance of the proposed method on synthetic graphs for both linear and nonlinear SCMs as well as on the in-silico single-cell gene regulatory network dataset, DREAM.
arXiv Detail & Related papers (2022-03-03T20:59:04Z) - Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - Statistical control for spatio-temporal MEG/EEG source imaging with
desparsified multi-task Lasso [102.84915019938413]
Techniques like magnetoencephalography (MEG) or electroencephalography (EEG) offer the promise of recording brain activity non-invasively.
The problem of source localization, or source imaging, poses however a high-dimensional statistical inference challenge.
We propose an ensemble of desparsified multi-task Lasso (ecd-MTLasso) to deal with this problem.
arXiv Detail & Related papers (2020-09-29T21:17:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.