A Neural Framework for Generalized Causal Sensitivity Analysis
- URL: http://arxiv.org/abs/2311.16026v2
- Date: Tue, 9 Apr 2024 17:08:55 GMT
- Title: A Neural Framework for Generalized Causal Sensitivity Analysis
- Authors: Dennis Frauen, Fergus Imrie, Alicia Curth, Valentyn Melnychuk, Stefan Feuerriegel, Mihaela van der Schaar
- Abstract summary: We propose NeuralCSA, a neural framework for causal sensitivity analysis.
We provide theoretical guarantees that NeuralCSA is able to infer valid bounds on the causal query of interest.
- Score: 78.71545648682705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unobserved confounding is common in many applications, making causal inference from observational data challenging. As a remedy, causal sensitivity analysis is an important tool to draw causal conclusions under unobserved confounding with mathematical guarantees. In this paper, we propose NeuralCSA, a neural framework for generalized causal sensitivity analysis. Unlike previous work, our framework is compatible with (i) a large class of sensitivity models, including the marginal sensitivity model, f-sensitivity models, and Rosenbaum's sensitivity model; (ii) different treatment types (i.e., binary and continuous); and (iii) different causal queries, including (conditional) average treatment effects and simultaneous effects on multiple outcomes. The generality of NeuralCSA is achieved by learning a latent distribution shift that corresponds to a treatment intervention using two conditional normalizing flows. We provide theoretical guarantees that NeuralCSA is able to infer valid bounds on the causal query of interest and also demonstrate this empirically using both simulated and real-world data.
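The marginal sensitivity model (MSM) named in the abstract bounds how far the true, confounder-aware propensity score may deviate from the nominal one. As a point of reference for what such a sensitivity model implies, the classical (non-neural) MSM bound on the reweighted mean outcome of the treated can be computed by a threshold rule over inverse-propensity weights. The sketch below illustrates that classical baseline only, not NeuralCSA itself; the function name and data are hypothetical.

```python
import numpy as np

def msm_bounds_treated_mean(y, e, gamma):
    """Bounds on the IPW-reweighted mean outcome of treated units under the
    marginal sensitivity model with parameter gamma >= 1.

    The MSM constrains the odds ratio between the nominal propensity e_i and
    the true propensity e(x_i, u_i) to lie in [1/gamma, gamma], so each unknown
    inverse-propensity weight w_i = 1 / e(x_i, u_i) lies in the box
    [1 + (1/gamma)(1/e_i - 1), 1 + gamma(1/e_i - 1)].
    The extremal weighted mean over this box is attained at a threshold in the
    sorted outcomes, so scanning all thresholds yields valid bounds.
    """
    y, e = np.asarray(y, float), np.asarray(e, float)
    lo_w = 1.0 + (1.0 / gamma) * (1.0 / e - 1.0)
    hi_w = 1.0 + gamma * (1.0 / e - 1.0)
    order = np.argsort(y)
    y_s, lo_s, hi_s = y[order], lo_w[order], hi_w[order]
    n = len(y_s)

    def wmean(w):
        return float(np.sum(w * y_s) / np.sum(w))

    # Upper bound: low weights on the k smallest outcomes, high on the rest.
    upper = max(wmean(np.r_[lo_s[:k], hi_s[k:]]) for k in range(n + 1))
    # Lower bound: high weights on the k smallest outcomes, low on the rest.
    lower = min(wmean(np.r_[hi_s[:k], lo_s[k:]]) for k in range(n + 1))
    return lower, upper
```

At gamma = 1 the weight interval collapses and both bounds reduce to the standard IPW estimate; larger gamma widens the interval, reflecting greater assumed unobserved confounding.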
Related papers
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Sensitivity-Aware Amortized Bayesian Inference [8.753065246797561]
Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses.
We propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to integrate sensitivity analyses into simulation-based inference with neural networks.
We demonstrate the effectiveness of our method in applied modeling problems, ranging from disease outbreak dynamics and global warming thresholds to human decision-making.
arXiv Detail & Related papers (2023-10-17T10:14:10Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Sharp Bounds for Generalized Causal Sensitivity Analysis [30.77874108094485]
We propose a unified framework for causal sensitivity analysis under unobserved confounding.
This includes (conditional) average treatment effects, effects for mediation analysis and path analysis, and distributional effects.
Our bounds for (conditional) average treatment effects coincide with recent optimality results for causal sensitivity analysis.
arXiv Detail & Related papers (2023-05-26T14:44:32Z) - Non-parametric identifiability and sensitivity analysis of synthetic control models [1.4610038284393165]
We study synthetic control models in Pearl's structural causal model framework.
We provide a general framework for sensitivity analysis of synthetic control causal inference to violations of the assumptions underlying non-parametric identifiability.
arXiv Detail & Related papers (2023-01-18T17:02:16Z) - Bayesian Models of Functional Connectomics and Behavior [0.0]
We present a fully Bayesian formulation for joint representation learning and prediction.
We present preliminary results on a subset of a publicly available clinical rs-fMRI study on patients with Autism Spectrum Disorder.
arXiv Detail & Related papers (2023-01-15T20:42:31Z) - Neural Dependencies Emerging from Learning Massive Categories [94.77992221690742]
This work presents two astonishing findings on neural networks learned for large-scale image classification.
1) Given a well-trained model, the logits predicted for some category can be directly obtained by linearly combining the predictions of a few other categories.
2) Neural dependencies exist not only within a single model, but even between two independently learned models.
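Finding 1 above can be illustrated mechanically: if one category's logit is (approximately) a linear combination of a few other categories' logits, an ordinary least-squares regression of that logit on the others recovers the combination. The toy below uses synthetic logits only, not the paper's models or datasets.

```python
import numpy as np

# Toy illustration of a "neural dependency": category 0's logit is built as a
# linear combination of the logits of categories 3 and 7.
rng = np.random.default_rng(0)
n_samples, n_categories = 200, 10
logits = rng.normal(size=(n_samples, n_categories))
logits[:, 0] = 0.6 * logits[:, 3] + 0.4 * logits[:, 7]

# Least-squares fit of category 0's logit on all other categories' logits.
# Column i of X corresponds to category i + 1.
X, y = logits[:, 1:], logits[:, 0]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the dependency is exact and noise-free here, the regression recovers the two coefficients and drives all others to (numerically) zero; with real logits the fit would only be approximate.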
arXiv Detail & Related papers (2022-11-21T09:42:15Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Causal Inference in Geoscience and Remote Sensing from Observational Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.