Interventional Sum-Product Networks: Causal Inference with Tractable
Probabilistic Models
- URL: http://arxiv.org/abs/2102.10440v2
- Date: Tue, 23 Feb 2021 19:53:20 GMT
- Title: Interventional Sum-Product Networks: Causal Inference with Tractable
Probabilistic Models
- Authors: Matej Zečević, Devendra Singh Dhami, Athresh Karanam, Sriraam
Natarajan and Kristian Kersting
- Abstract summary: We consider the problem of learning interventional distributions using sum-product networks (SPNs).
We provide an arbitrarily intervened causal graph as input, effectively subsuming Pearl's do-operator.
The resulting interventional SPNs are motivated and illustrated by a structural causal model themed around personal health.
- Score: 26.497268758016595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While probabilistic models are an important tool for studying causality,
doing so suffers from the intractability of inference. As a step towards
tractable causal models, we consider the problem of learning interventional
distributions using sum-product networks (SPNs) that are over-parameterized by
gate functions, e.g., neural networks. Providing an arbitrarily intervened
causal graph as input, effectively subsuming Pearl's do-operator, the gate
function predicts the parameters of the SPN. The resulting interventional SPNs
are motivated and illustrated by a structural causal model themed around
personal health. Our empirical evaluation on three benchmark data sets as well
as a synthetic health data set clearly demonstrates that interventional SPNs
indeed are both expressive in modelling and flexible in adapting to the
interventions.
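The two ingredients described above can be sketched in code: Pearl's do-operator mutilates the causal graph by cutting every edge into the intervened variable, and a gate function maps the (possibly mutilated) graph to the parameters of the SPN. The following is a minimal illustrative sketch, not the authors' implementation; the linear gate, the adjacency encoding, and all names are assumptions made for the example.

```python
# Illustrative sketch of the iSPN idea (hypothetical, not the paper's code):
# a do-intervention mutilates the causal graph, and a small "gate function"
# maps the flattened adjacency matrix to mixture weights of a toy SPN sum node.
import math
import random

def do_intervention(adj, target):
    """Return a mutilated copy of the adjacency matrix: the do-operator
    removes all edges pointing INTO the intervened variable `target`."""
    mutilated = [row[:] for row in adj]
    for i in range(len(adj)):
        mutilated[i][target] = 0  # cut edge i -> target
    return mutilated

def gate_function(adj, weights, bias):
    """Toy one-layer gate: a linear map over the flattened graph followed
    by a softmax, yielding mixture weights for an SPN sum node. The paper
    uses a neural network here; a single linear layer stands in for it."""
    flat = [v for row in adj for v in row]
    logits = [sum(w * x for w, x in zip(wrow, flat)) + b
              for wrow, b in zip(weights, bias)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Three-variable chain X -> Y -> Z; intervene with do(Y).
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
mut = do_intervention(adj, target=1)
assert mut[0][1] == 0 and mut[1][2] == 1  # X->Y cut, Y->Z kept

random.seed(0)
W = [[random.gauss(0, 0.1) for _ in range(9)] for _ in range(2)]
b = [0.0, 0.0]
w_obs = gate_function(adj, W, b)   # parameters under observation
w_int = gate_function(mut, W, b)   # parameters under do(Y)
assert abs(sum(w_obs) - 1) < 1e-9 and abs(sum(w_int) - 1) < 1e-9
```

Because the gate receives the mutilated graph as input, a single parameterized model can produce different SPN parameters for every intervention, which is the sense in which the architecture subsumes the do-operator.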
Related papers
- Kernel-based estimators for functional causal effects [1.6749379740049928]
We propose causal effect estimators based on empirical Fréchet means and operator-valued kernels, tailored to functional data spaces.
These methods address the challenges of high-dimensionality, sequential ordering, and model complexity while preserving robustness to treatment misspecification.
arXiv Detail & Related papers (2025-03-06T22:48:55Z)
- Generative Intervention Models for Causal Perturbation Modeling [80.72074987374141]
In many applications, it is a priori unknown which mechanisms of a system are modified by an external perturbation.
We propose a generative intervention model (GIM) that learns to map these perturbation features to distributions over atomic interventions.
arXiv Detail & Related papers (2024-11-21T10:37:57Z)
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this model completion learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- C-XGBoost: A tree boosting model for causal effect estimation [8.246161706153805]
Causal effect estimation aims at estimating the Average Treatment Effect as well as the Conditional Average Treatment Effect of a treatment on an outcome from the available data.
We propose a new causal inference model, named C-XGBoost, for the prediction of potential outcomes.
arXiv Detail & Related papers (2024-03-31T17:43:37Z)
- Bayesian Causal Inference with Gaussian Process Networks [1.7188280334580197]
We consider the problem of the Bayesian estimation of the effects of hypothetical interventions in the Gaussian Process Network model.
We detail how to perform causal inference on GPNs by simulating the effect of an intervention across the whole network and propagating the effect of the intervention on downstream variables.
We extend both frameworks beyond the case of a known causal graph, incorporating uncertainty about the causal structure via Markov chain Monte Carlo methods.
arXiv Detail & Related papers (2024-02-01T14:39:59Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Causal Analysis for Robust Interpretability of Neural Networks [0.2519906683279152]
We develop a robust interventional-based method to capture cause-effect mechanisms in pre-trained neural networks.
We apply our method to vision models trained on classification tasks.
arXiv Detail & Related papers (2023-05-15T18:37:24Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Causal Inference with Deep Causal Graphs [0.0]
Parametric causal modelling techniques rarely provide functionality for counterfactual estimation.
Deep Causal Graphs is an abstract specification of the required functionality for a neural network to model causal distributions.
We demonstrate its expressive power in modelling complex interactions and showcase applications to machine learning explainability and fairness.
arXiv Detail & Related papers (2020-06-15T13:03:33Z)
- Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity [2.8402080392117752]
Neuroscience applications collect high-dimensional predictors corresponding to brain activity in different regions along with behavioral outcomes.
Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimates of these models can struggle in practice when there is model misspecification.
We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors.
arXiv Detail & Related papers (2020-04-10T19:31:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.