Density Ratio-based Proxy Causal Learning Without Density Ratios
- URL: http://arxiv.org/abs/2503.08371v1
- Date: Tue, 11 Mar 2025 12:27:54 GMT
- Title: Density Ratio-based Proxy Causal Learning Without Density Ratios
- Authors: Bariscan Bozkurt, Ben Deaner, Dimitri Meunier, Liyuan Xu, Arthur Gretton
- Abstract summary: We address the setting of Proxy Causal Learning (PCL), which has the goal of estimating causal effects from observed data in the presence of hidden confounding. Two approaches have been proposed to perform causal effect estimation given proxy variables. We propose a practical and effective implementation of the second approach, which bypasses explicit density ratio estimation and is suitable for continuous and high-dimensional treatments.
- Score: 26.49087216375106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the setting of Proxy Causal Learning (PCL), which has the goal of estimating causal effects from observed data in the presence of hidden confounding. Proxy methods accomplish this task using two proxy variables related to the latent confounder: a treatment proxy (related to the treatment) and an outcome proxy (related to the outcome). Two approaches have been proposed to perform causal effect estimation given proxy variables; however, only one of these has found mainstream acceptance, since the other was understood to require density ratio estimation - a challenging task in high dimensions. In the present work, we propose a practical and effective implementation of the second approach, which bypasses explicit density ratio estimation and is suitable for continuous and high-dimensional treatments. We employ kernel ridge regression to derive estimators, resulting in simple closed-form solutions for dose-response and conditional dose-response curves, along with consistency guarantees. Our methods empirically demonstrate superior or comparable performance to existing frameworks on synthetic and real-world datasets.
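As a rough, self-contained illustration of the kind of closed-form, averaging-based dose-response estimator the abstract refers to, the sketch below uses Gaussian-kernel ridge regression in the simpler setting where the confounder is observed; in the proxy setting that confounder is latent and is handled through the treatment and outcome proxies instead. Everything here (data-generating process, kernel bandwidth, regularisation) is an illustrative assumption, not the paper's estimator.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

# Toy data with an *observed* confounder Xc; in the proxy setting Xc would be
# latent and replaced by the treatment/outcome proxies (Z, W).
rng = np.random.default_rng(0)
n = 300
Xc = rng.normal(size=(n, 1))                              # confounder
A = Xc + rng.normal(scale=0.5, size=(n, 1))               # treatment
Y = np.sin(A) + Xc + rng.normal(scale=0.1, size=(n, 1))   # outcome

# Closed-form kernel ridge regression of Y on (A, Xc): alpha = (K + n*lam*I)^{-1} Y.
features = np.hstack([A, Xc])
K = rbf_kernel(features, features)
lam = 1e-2
alpha = np.linalg.solve(K + n * lam * np.eye(n), Y)

# Dose-response by regression adjustment: fix the treatment at a and average the
# fitted regression over the empirical confounder distribution.
for a in (-1.0, 0.0, 1.0):
    features_a = np.hstack([np.full((n, 1), a), Xc])
    beta_a = (rbf_kernel(features_a, features) @ alpha).mean()
    print(f"a = {a:+.1f}   estimated E[Y(a)] = {beta_a:+.3f}   truth = {np.sin(a):+.3f}")
```

Loosely speaking, the closed form `alpha = (K + n*lam*I)^{-1} Y` and the averaging step are the ingredients that carry over to kernel-based proxy estimators; what changes is which kernel matrices and regressions enter them.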
Related papers
- Automating the Selection of Proxy Variables of Unmeasured Confounders [16.773841751009748]
We extend the existing proxy variable estimator to accommodate scenarios where multiple unmeasured confounders exist between the treatments and the outcome.
We propose two data-driven methods for the selection of proxy variables and for the unbiased estimation of causal effects.
arXiv Detail & Related papers (2024-05-25T08:53:49Z) - Doubly Robust Proximal Causal Learning for Continuous Treatments [56.05592840537398]
We propose a kernel-based doubly robust causal learning estimator for continuous treatments.
We show that its oracle form is a consistent approximation of the influence function.
We then provide a comprehensive convergence analysis in terms of the mean square error.
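As a point of reference for what "doubly robust for continuous treatments" means, below is a minimal sketch of the non-proximal ingredient: a kernel-smoothed, AIPW-style dose-response estimate when the confounders are observed. The cited paper works in the proximal setting, where these nuisances are replaced by proxy-based bridge functions; the data-generating process, random-forest nuisance models, Gaussian propensity approximation, and bandwidth below are illustrative assumptions, not that paper's estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))                                   # observed confounders
A = X[:, 0] + rng.normal(scale=1.0, size=n)                   # continuous treatment
Y = np.sin(A) + X[:, 0] + rng.normal(scale=0.3, size=n)       # outcome, confounded by X[:, 0]

# Nuisance 1: outcome regression mu(a, x).
mu = RandomForestRegressor(n_estimators=200, random_state=0).fit(np.column_stack([A, X]), Y)

# Nuisance 2: generalised propensity score pi(a | x), modelled here as a Gaussian
# around a regression of A on X (an illustrative simplification).
m_a = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, A)
sigma = (A - m_a.predict(X)).std()

def gps(a, X):
    """Gaussian approximation to the conditional density of A given X, evaluated at a."""
    mean = m_a.predict(X)
    return np.exp(-(a - mean) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def dr_dose_response(a, h=0.3):
    """Kernel-smoothed doubly robust estimate of E[Y(a)]."""
    mu_a = mu.predict(np.column_stack([np.full(n, a), X]))               # plug-in term
    kern = np.exp(-(A - a) ** 2 / (2 * h**2)) / (np.sqrt(2 * np.pi) * h)  # kernel weight around a
    resid = Y - mu.predict(np.column_stack([A, X]))
    correction = kern / np.clip(gps(a, X), 1e-3, None) * resid            # IPW-style correction
    return float(np.mean(mu_a + correction))

for a in (-1.0, 0.0, 1.0):
    print(f"a = {a:+.1f}   DR estimate = {dr_dose_response(a):+.3f}   truth = {np.sin(a):+.3f}")
```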
arXiv Detail & Related papers (2023-09-22T12:18:53Z) - Kernel Single Proxy Control for Deterministic Confounding [32.70182383946395]
We show that a single proxy variable is sufficient for causal estimation if the outcome is generated deterministically.
We prove that the causal effect can be recovered under this condition and empirically demonstrate successful recovery on challenging synthetic benchmarks.
arXiv Detail & Related papers (2023-08-08T21:11:06Z) - B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z) - Causal Inference under Data Restrictions [0.0]
This dissertation focuses on modern causal inference under uncertainty and data restrictions.
It includes applications to neoadjuvant clinical trials, distributed data networks, and robust individualized decision making.
arXiv Detail & Related papers (2023-01-20T20:14:32Z) - Deep Learning Methods for Proximal Inference via Maximum Moment Restriction [0.0]
We introduce a flexible and scalable method based on a deep neural network to estimate causal effects in the presence of unmeasured confounding.
Our method achieves state-of-the-art performance on two well-established proximal inference benchmarks.
arXiv Detail & Related papers (2022-05-19T19:51:42Z) - Deterministic and Discriminative Imitation (D2-Imitation): Revisiting Adversarial Imitation for Sample Efficiency [61.03922379081648]
We propose an off-policy, sample-efficient approach that requires no adversarial training or min-max optimization.
Our empirical results show that D2-Imitation is effective in achieving good sample efficiency, outperforming several off-policy extension approaches of adversarial imitation.
arXiv Detail & Related papers (2021-12-11T19:36:19Z) - Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation [26.47311758786421]
Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding.
We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have nonlinear complex relationships.
arXiv Detail & Related papers (2021-06-07T18:36:13Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction [39.51144507601913]
We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterised by a Fredholm integral equation.
We provide consistency guarantees for each algorithm, and we demonstrate these approaches achieve competitive results on synthetic data and data simulating a real-world task.
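For context, in the standard proxy causal learning formulation (notation here follows the broader PCL literature rather than this specific paper), the inverse problem is a Fredholm integral equation of the first kind for an outcome bridge function $h$:

$$
\mathbb{E}[Y \mid Z = z, A = a] \;=\; \int h(w, a)\, p(w \mid z, a)\, \mathrm{d}w ,
$$

and, under suitable completeness conditions, the dose-response curve is recovered by averaging the bridge function over the outcome proxy, $\beta(a) = \mathbb{E}_{W}[h(W, a)]$.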
arXiv Detail & Related papers (2021-05-10T17:52:48Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders [62.54431888432302]
We study an off-policy evaluation (OPE) problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders.
We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data.
arXiv Detail & Related papers (2020-07-27T22:19:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.