Ensembled Prediction Intervals for Causal Outcomes Under Hidden
Confounding
- URL: http://arxiv.org/abs/2306.09520v2
- Date: Wed, 1 Nov 2023 05:00:04 GMT
- Title: Ensembled Prediction Intervals for Causal Outcomes Under Hidden
Confounding
- Authors: Myrl G. Marmarelis, Greg Ver Steeg, Aram Galstyan, Fred Morstatter
- Abstract summary: We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals.
The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.
- Score: 49.1865229301561
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference of exact individual treatment outcomes in the presence of
hidden confounders is rarely possible. Recent work has extended prediction
intervals with finite-sample guarantees to partially identifiable causal
outcomes, by means of a sensitivity model for hidden confounding. In deep
learning, predictors can exploit their inductive biases for better
generalization out of sample. We argue that the structure inherent to a deep
ensemble should inform a tighter partial identification of the causal outcomes
that they predict. We therefore introduce an approach termed Caus-Modens, for
characterizing causal outcome intervals by modulated ensembles. We present a
simple approach to partial identification using existing causal sensitivity
models and show empirically that Caus-Modens gives tighter outcome intervals,
as measured by the necessary interval size to achieve sufficient coverage. The
last of our three diverse benchmarks is a novel usage of GPT-4 for
observational experiments with unknown but probeable ground truth.
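The tightness metric the abstract refers to can be illustrated with a minimal sketch: given a toy ensemble (hypothetical stand-in data, not the paper's Caus-Modens method or its benchmarks), form intervals from the ensemble spread and find the smallest width multiplier that reaches a target coverage level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: outcomes and predictions from a 5-member ensemble
# (hypothetical stand-ins; the paper's actual benchmarks differ).
y_true = rng.normal(size=200)
ensemble = y_true + rng.normal(scale=0.5, size=(5, 200))

mu = ensemble.mean(axis=0)    # ensemble mean prediction
sigma = ensemble.std(axis=0)  # ensemble spread per example

def coverage(width_mult):
    """Fraction of outcomes inside mu +/- width_mult * sigma."""
    lo, hi = mu - width_mult * sigma, mu + width_mult * sigma
    return np.mean((y_true >= lo) & (y_true <= hi))

# Smallest multiplier achieving 90% empirical coverage: this
# "necessary interval size" is the tightness criterion above.
mults = np.linspace(0.1, 10.0, 1000)
needed = next(m for m in mults if coverage(m) >= 0.9)
```

A tighter method needs a smaller `needed` multiplier for the same coverage, which is the sense in which the abstract compares interval sizes.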
Related papers
- Valid causal inference with unobserved confounding in high-dimensional
settings [0.0]
We show how valid semiparametric inference can be obtained in the presence of unobserved confounders and high-dimensional nuisance models.
We propose uncertainty intervals which allow for unobserved confounding, and show that the resulting inference is valid when the amount of unobserved confounding is small.
arXiv Detail & Related papers (2024-01-12T13:21:20Z) - Model-Agnostic Covariate-Assisted Inference on Partially Identified Causal Effects [1.9253333342733674]
Many causal estimands are only partially identifiable since they depend on the unobservable joint distribution between potential outcomes.
We propose a unified and model-agnostic inferential approach for a wide class of partially identified estimands.
arXiv Detail & Related papers (2023-10-12T08:17:30Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Testing for Overfitting [0.0]
We discuss the overfitting problem and explain why standard concentration results do not hold for evaluation with training data.
We introduce and argue for a hypothesis test by means of which model performance may be evaluated using training data.
arXiv Detail & Related papers (2023-05-09T22:49:55Z) - Uncertainty Estimates of Predictions via a General Bias-Variance
Decomposition [7.811916700683125]
We introduce a bias-variance decomposition for proper scores, giving rise to the Bregman Information as the variance term.
We showcase the practical relevance of this decomposition on several downstream tasks, including model ensembles and confidence regions.
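The decomposition summarized above builds on a standard identity for Bregman divergences (the paper's full result for proper scores may differ in details); as a sketch, with generator \(\phi\) and \(d_\phi(x, z) = \phi(x) - \phi(z) - \langle \nabla\phi(z), x - z \rangle\):

```latex
\mathbb{E}\!\left[ d_\phi(X, z) \right]
  = \underbrace{d_\phi\!\left(\mathbb{E}[X],\, z\right)}_{\text{bias}}
  + \underbrace{\mathbb{E}\!\left[ d_\phi\!\left(X,\, \mathbb{E}[X]\right) \right]}_{\text{Bregman Information } \mathbb{I}_\phi(X)}
```

Here the variance-like term \(\mathbb{I}_\phi(X) = \mathbb{E}[\phi(X)] - \phi(\mathbb{E}[X])\) is what the abstract calls the Bregman Information.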
arXiv Detail & Related papers (2022-10-21T21:24:37Z) - BaCaDI: Bayesian Causal Discovery with Unknown Interventions [118.93754590721173]
BaCaDI operates in the continuous space of latent probabilistic representations of both causal structures and interventions.
In experiments on synthetic causal discovery tasks and simulated gene-expression data, BaCaDI outperforms related methods in identifying causal structures and intervention targets.
arXiv Detail & Related papers (2022-06-03T16:25:48Z) - Predicting Unreliable Predictions by Shattering a Neural Network [145.3823991041987]
Piecewise linear neural networks can be split into subfunctions.
Subfunctions have their own activation pattern, domain, and empirical error.
Empirical error for the full network can be written as an expectation over subfunctions.
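The three statements above can be checked on a toy example (a minimal sketch with hypothetical random weights, not the paper's method): group inputs by the ReLU activation pattern they induce, so each group corresponds to one linear subfunction, and verify that the full empirical error is the probability-weighted average of per-subfunction errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny one-hidden-layer ReLU network with random weights.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

X = rng.normal(size=(500, 2))
y = rng.normal(size=500)

pre = X @ W1.T + b1              # hidden pre-activations
patterns = pre > 0               # activation pattern: which ReLUs fire
out = np.maximum(pre, 0) @ W2.T + b2
errors = (out.ravel() - y) ** 2  # per-example squared error

# Group examples by activation pattern; each group is one subfunction.
groups = {}
for p, e in zip(map(tuple, patterns), errors):
    groups.setdefault(p, []).append(e)

# Full empirical error as an expectation over subfunctions:
# weight each subfunction's mean error by its empirical probability.
total = errors.mean()
weighted = sum(len(v) * np.mean(v) for v in groups.values()) / len(errors)
```

The two quantities agree exactly, which is the identity the summary states; the per-subfunction errors themselves are what vary and flag unreliable regions.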
arXiv Detail & Related papers (2021-06-15T18:34:41Z) - Deconfounded Score Method: Scoring DAGs with Dense Unobserved
Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z) - Closeness and Uncertainty Aware Adversarial Examples Detection in
Adversarial Machine Learning [0.7734726150561088]
We explore and assess the usage of two different groups of metrics in detecting adversarial samples.
We introduce a new feature for adversarial detection, and we show that the performance of all these metrics depends heavily on the strength of the attack being used.
arXiv Detail & Related papers (2020-12-11T14:44:59Z) - The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.