Deep Learning Methods for Proximal Inference via Maximum Moment
Restriction
- URL: http://arxiv.org/abs/2205.09824v1
- Date: Thu, 19 May 2022 19:51:42 GMT
- Title: Deep Learning Methods for Proximal Inference via Maximum Moment
Restriction
- Authors: Benjamin Kompa and David R. Bellamy and Thomas Kolokotrones and James
M. Robins and Andrew L. Beam
- Abstract summary: We introduce a flexible and scalable method based on a deep neural network to estimate causal effects in the presence of unmeasured confounding.
Our method achieves state-of-the-art performance on two well-established proximal inference benchmarks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The No Unmeasured Confounding Assumption is widely used to identify causal
effects in observational studies. Recent work on proximal inference has
provided alternative identification results that succeed even in the presence
of unobserved confounders, provided that one has measured a sufficiently rich
set of proxy variables, satisfying specific structural conditions. However,
proximal inference requires solving an ill-posed integral equation. Previous
approaches have used a variety of machine learning techniques to estimate a
solution to this integral equation, commonly referred to as the bridge
function. However, prior work has often been limited by relying on
pre-specified kernel functions, which are not data adaptive and struggle to
scale to large datasets. In this work, we introduce a flexible and scalable
method based on a deep neural network to estimate causal effects in the
presence of unmeasured confounding using proximal inference. Our method
achieves state-of-the-art performance on two well-established proximal
inference benchmarks. Finally, we provide theoretical consistency guarantees
for our method.
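For context, the following is a brief sketch of the estimation problem described above. The notation is ours and the exact formulation in the paper may differ. Writing A for the treatment, Y for the outcome, X for observed covariates, Z for a treatment-inducing proxy, and W for an outcome-inducing proxy, the bridge function h satisfies a conditional moment restriction, identifies the counterfactual mean under the structural conditions mentioned above, and can be targeted through a single kernelized criterion (the maximum moment restriction):

  \mathbb{E}[\, Y - h(W, A, X) \mid A, Z, X \,] = 0
  \mathbb{E}[\, Y^{(a)} \,] = \mathbb{E}[\, h(W, a, X) \,]
  R(h) = \mathbb{E}\big[ (Y - h(W, A, X)) \, (Y' - h(W', A', X')) \, k\big((A, Z, X), (A', Z', X')\big) \big]

where (Y', W', A', Z', X') is an independent copy of the data and k is a characteristic kernel. R(h) = 0 exactly when the conditional moment restriction holds, so h can be estimated by minimizing an empirical version of R(h), here with h parameterized by a neural network. Below is a minimal, hypothetical PyTorch sketch of that idea: it parameterizes h with a small MLP and minimizes a V-statistic estimate of R(h) with an RBF kernel. All names, the kernel choice, and the hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BridgeNet(nn.Module):
    # Small MLP standing in for the bridge function h(w, a, x).
    def __init__(self, dim_in, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, w, a, x):
        return self.net(torch.cat([w, a, x], dim=-1)).squeeze(-1)

def rbf_kernel(u, lengthscale=1.0):
    # Gaussian kernel matrix on the conditioning variables (A, Z, X).
    d2 = torch.cdist(u, u) ** 2
    return torch.exp(-d2 / (2.0 * lengthscale ** 2))

def mmr_loss(h, y, w, a, x, z):
    # V-statistic estimate of R(h) = E[(Y - h)(Y' - h') k((A,Z,X), (A',Z',X'))].
    resid = y - h(w, a, x)                        # moment-condition residuals, shape (n,)
    K = rbf_kernel(torch.cat([a, z, x], dim=-1))  # n x n kernel matrix
    n = y.shape[0]
    return (resid[:, None] * K * resid[None, :]).sum() / (n * n)

# One optimization step on synthetic 1-D data, purely to illustrate the training loop.
n = 128
y = torch.randn(n)
w, a, x, z = (torch.randn(n, 1) for _ in range(4))
h = BridgeNet(dim_in=3)
opt = torch.optim.Adam(h.parameters(), lr=1e-3)
opt.zero_grad()
loss = mmr_loss(h, y, w, a, x, z)
loss.backward()
opt.step()

Note that the kernel acts only on the conditioning variables (A, Z, X), while the network also receives the proxy W; this mirrors the bridge equation, in which the residual must be mean-zero given (A, Z, X).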
Related papers
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the greater scope of Deep Learning-based computer vision.
Uncertainty quantification has been extensively studied within this context, enabling expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision making.
This work provides a comprehensive overview of probabilistic segmentation by discussing fundamental concepts in uncertainty that govern advancements in the field and their application to various tasks.
arXiv Detail & Related papers (2024-11-25T13:26:09Z) - Combining Statistical Depth and Fermat Distance for Uncertainty Quantification [3.3975558777609915]
We measure the out-of-domain uncertainty in the predictions of neural networks using a statistical notion called "Lens Depth" (LD) combined with Fermat Distance.
The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimates on standard deep learning datasets.
arXiv Detail & Related papers (2024-04-12T13:54:21Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under
Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the conditional average treatment effect (CATE) function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z) - Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution (a brief note after this list makes this concrete).
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - Residual Overfit Method of Exploration [78.07532520582313]
We propose an approximate exploration methodology based on fitting only two point estimates, one tuned and one overfit.
The approach drives exploration towards actions where the overfit model exhibits the most overfitting compared to the tuned model.
We compare ROME against a set of established contextual bandit methods on three datasets and find it to be one of the best performing.
arXiv Detail & Related papers (2021-10-06T17:05:33Z) - Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment
Restriction [39.51144507601913]
We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterised by a Fredholm integral equation (the general form of such an equation is sketched in a note after this list).
We provide consistency guarantees for each algorithm, and we demonstrate that these approaches achieve competitive results on synthetic data and on data simulating a real-world task.
arXiv Detail & Related papers (2021-05-10T17:52:48Z) - A Class of Algorithms for General Instrumental Variable Models [29.558215059892206]
Causal treatment effect estimation is a key problem that arises in a variety of real-world settings.
We provide a method for causal effect bounding in continuous distributions.
arXiv Detail & Related papers (2020-06-11T12:32:24Z) - Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z) - MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent
Variable Models [14.173184309520453]
State-of-the-art methods for causal inference do not consider missing values.
Missing data require an adapted unconfoundedness hypothesis.
Latent confounders whose distribution is learned through variational autoencoders adapted to missing values are considered.
arXiv Detail & Related papers (2020-02-25T12:58:07Z)
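Two brief clarifying notes on entries above; these sketches are ours, use assumed notation, and are not taken from the papers they refer to.
- Fredholm integral equation (Proximal Causal Learning with Kernels): a Fredholm integral equation of the first kind asks for an unknown function h satisfying \int K(t, s) \, h(s) \, ds = g(t) for known K and g. The bridge equation of proximal inference can be written in this form, e.g. \mathbb{E}[\, Y \mid A=a, Z=z, X=x \,] = \int h(w, a, x) \, f(w \mid a, z, x) \, dw, with the conditional density f playing the role of the kernel K; inverting such equations is ill-posed, which is why regularized or moment-restriction estimators are used.
- Early stopping as spectral filtering (On the Benefits of Large Learning Rates for Kernel Methods): for a quadratic objective \tfrac{1}{2}\langle\theta, H\theta\rangle - \langle b, \theta\rangle in a Hilbert space, gradient descent with step size \gamma started at \theta_0 = 0 gives \theta_t = \sum_{k=0}^{t-1} \gamma (I - \gamma H)^k b, so along an eigendirection of H with eigenvalue \lambda the early-stopped solution is scaled by the filter (1 - (1 - \gamma\lambda)^t)/\lambda; the shape of this filter depends jointly on \gamma and t, which is one concrete sense in which the learning rate influences the spectral decomposition of the obtained solution.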
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.