Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
- URL: http://arxiv.org/abs/2106.03907v5
- Date: Tue, 18 Jun 2024 08:40:30 GMT
- Title: Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation
- Authors: Liyuan Xu, Heishiro Kanagawa, Arthur Gretton
- Abstract summary: Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding.
We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have complex nonlinear relationships.
- Score: 26.47311758786421
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have complex nonlinear relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.
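To make the two-stage structure concrete, here is a minimal sketch of the PCL idea with plain ridge regression standing in for DFPV's learned deep features. The variable roles (treatment A, treatment proxy Z, outcome proxy W, outcome Y) follow the PCL literature; all function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def pcl_two_stage(A, Z, W, Y, a_grid, lam1=0.1, lam2=0.1):
    """Minimal linear-feature sketch of proxy causal learning (PCL).

    A: (n,) treatment, Z: (n,) treatment proxy, W: (n,) outcome proxy,
    Y: (n,) outcome. Returns an estimate of E[Y | do(a)] on a_grid.
    """
    n = len(A)
    # Stage 1: model the outcome proxy W from the treatment A and proxy Z.
    X1 = np.column_stack([np.ones(n), A, Z])
    beta1 = np.linalg.solve(X1.T @ X1 + lam1 * np.eye(3), X1.T @ W)
    W_hat = X1 @ beta1
    # Stage 2: regress Y on the treatment and the stage-1 prediction,
    # giving an estimate of the "bridge" function h(a, w).
    X2 = np.column_stack([np.ones(n), A, W_hat])
    beta2 = np.linalg.solve(X2.T @ X2 + lam2 * np.eye(3), X2.T @ Y)
    # Causal effect at a: average h(a, W) over the observed proxy values.
    return np.array([
        np.mean(beta2[0] + beta2[1] * a + beta2[2] * W) for a in a_grid
    ])
```

DFPV replaces the fixed linear features above with neural network features learned jointly in both stages, which is what allows it to handle high-dimensional inputs such as images.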
Related papers
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness, to protect against potential adversarial attacks, and reliable uncertainty quantification.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
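For context on the conformal prediction (CP) procedure this entry examines, below is a minimal split-conformal sketch for classification, without the paper's adversarial component; all names and shapes are illustrative.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs: (n, K) predicted class probabilities on a calibration set,
    cal_labels: (n,) integer true labels, test_probs: (m, K).
    Returns a boolean (m, K) mask of prediction sets with ~(1 - alpha)
    marginal coverage under exchangeability.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level)
    # Include every class whose score falls below the threshold.
    return (1.0 - test_probs) <= q
```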
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Recovering Latent Confounders from High-dimensional Proxy Variables [4.273372609646382]
We present a novel Proxy Confounder Factorization (PCF) framework for continuous treatment effect estimation.
For specific sample sizes, our two-step PCF implementation, using Independent Component Analysis (ICA-PCF), and the end-to-end implementation, using Gradient Descent (GD-PCF), achieve high correlation with the latent confounder.
Even when faced with real-world climate data, ICA-PCF recovers four components that explain 75.9% of the variance in the North Atlantic Oscillation.
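As a hedged illustration of the two-step variant only, the snippet below shows the ICA step: unmixing candidate latent confounders from high-dimensional proxies with scikit-learn's FastICA. The data, shapes, and component count are synthetic placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical data: proxies are linear mixtures of a few non-Gaussian
# latent confounders plus noise.
rng = np.random.default_rng(0)
latent = rng.standard_normal((1000, 4)) ** 3      # non-Gaussian sources
mixing = rng.standard_normal((4, 50))
X_proxy = latent @ mixing + 0.1 * rng.standard_normal((1000, 50))

# Step 1 of a two-step PCF-style pipeline: unmix candidate confounders.
ica = FastICA(n_components=4, random_state=0)
confounders_hat = ica.fit_transform(X_proxy)      # (1000, 4) estimates

# The recovered components could then be passed to any downstream
# treatment-effect estimator as if they were observed confounders.
```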
arXiv Detail & Related papers (2024-03-21T08:39:13Z)
- Estimation of individual causal effects in network setup for multiple treatments [4.53340898566495]
We study the problem of estimation of Individual Treatment Effects (ITE) in the context of multiple treatments and observational data.
We employ Graph Convolutional Networks (GCN) to learn a shared representation of the confounders.
Our approach utilizes separate neural networks to infer potential outcomes for each treatment.
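A minimal sketch of the described architecture, assuming a dense adjacency matrix and a single mean-aggregation step in place of a full GCN; layer sizes and names are illustrative only.

```python
import torch
import torch.nn as nn

class SharedRepITE(nn.Module):
    """Shared confounder representation + one outcome head per treatment.

    One mean-aggregation message pass stands in for the paper's GCN;
    `adj` is a dense (n, n) adjacency tensor in this sketch.
    """

    def __init__(self, in_dim, rep_dim, n_treatments):
        super().__init__()
        self.encode = nn.Linear(in_dim, rep_dim)
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(rep_dim, 64), nn.ReLU(),
                           nn.Linear(64, 1)) for _ in range(n_treatments)]
        )

    def forward(self, x, adj):
        # Mean over neighbours approximates one graph-convolution step.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.encode((adj @ x) / deg))
        # One potential-outcome prediction per treatment arm.
        return torch.cat([head(h) for head in self.heads], dim=1)
```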
arXiv Detail & Related papers (2023-12-18T06:07:45Z)
- Adversarially Balanced Representation for Continuous Treatment Effect Estimation [6.469020202994118]
In this paper, we consider the more practical and challenging scenario in which the treatment is a continuous variable.
We propose the adversarial counterfactual regression network (ACFR) that adversarially minimizes the representation imbalance in terms of KL divergence.
Our experimental evaluation on semi-synthetic datasets demonstrates the empirical superiority of ACFR over a range of state-of-the-art methods.
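The snippet below sketches one plausible reading of adversarial representation balancing for a continuous treatment: an adversary tries to predict the treatment from the representation, and the encoder is penalized when it succeeds. The paper's actual objective is a KL-divergence term, so this is an analogy, not ACFR itself; all dimensions and weights are placeholders.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
adv = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
out = nn.Sequential(nn.Linear(17, 32), nn.ReLU(), nn.Linear(32, 1))
opt_main = torch.optim.Adam(list(enc.parameters()) + list(out.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)

x, t, y = torch.randn(256, 10), torch.rand(256, 1), torch.randn(256, 1)
for _ in range(100):
    # Adversary step: predict treatment from the (detached) representation.
    adv_loss = ((adv(enc(x).detach()) - t) ** 2).mean()
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # Main step: fit the outcome while making the representation
    # uninformative about treatment (negative adversary-loss term).
    rep = enc(x)
    y_hat = out(torch.cat([rep, t], dim=1))
    main_loss = ((y_hat - y) ** 2).mean() - 0.1 * ((adv(rep) - t) ** 2).mean()
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```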
arXiv Detail & Related papers (2023-12-17T00:46:16Z)
- Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
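For orientation only, here is the standard doubly robust off-policy value estimate in its one-step (bandit) form; the paper's DPE estimator for sequence-modeled RL is defined in the paper itself, and all argument names here are illustrative.

```python
import numpy as np

def doubly_robust_value(r, pi_e, pi_b, q_a, v_pi, clip=10.0):
    """Generic doubly robust off-policy value estimate (bandit form).

    r: observed rewards, pi_e / pi_b: evaluation and behaviour policy
    probabilities of the logged actions, q_a: model value of the logged
    action, v_pi: model value of the evaluation policy at each state.
    """
    rho = np.clip(pi_e / pi_b, 0.0, clip)   # truncated importance weights
    return np.mean(v_pi + rho * (r - q_a))  # model baseline + IS correction
```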
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Kernel Single Proxy Control for Deterministic Confounding [32.70182383946395]
We show that a single proxy variable is sufficient for causal estimation if the outcome is generated deterministically.
We prove and empirically demonstrate that we can successfully recover the causal effect on challenging synthetic benchmarks.
arXiv Detail & Related papers (2023-08-08T21:11:06Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, a guaranteed near-optimal policy at the last iterate, and guaranteed model accuracy.
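To illustrate the generic "UCB-type" pattern the paper builds on, here is classic UCB1 for a multi-armed bandit; the paper's bonus term, which upper bounds a total variation distance between estimated and true PSR models, is substantially more involved.

```python
import numpy as np

def ucb1_choose(counts, means, t, c=2.0):
    """Pick the arm maximizing empirical mean + optimism bonus.

    counts: (K,) pull counts, means: (K,) empirical mean rewards,
    t: current round. The bonus shrinks as an arm is pulled more often.
    """
    bonus = np.sqrt(c * np.log(max(t, 2)) / np.maximum(counts, 1))
    return int(np.argmax(means + bonus))  # optimistic arm selection
```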
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Deep Metric Learning with Soft Orthogonal Proxies [1.823505080809275]
We propose a novel approach that introduces a Soft Orthogonality (SO) constraint on proxies.
Our approach leverages the Data-Efficient Image Transformer (DeiT) as an encoder to extract contextual features from images, along with a DML objective.
Our evaluations demonstrate the superiority of our proposed approach over state-of-the-art methods by a significant margin.
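One plausible reading of a soft orthogonality constraint is a penalty on pairwise cosine similarity between class proxies, sketched below; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def soft_orthogonality_loss(proxies):
    """Penalize pairwise cosine similarity between class proxies so they
    spread out toward orthogonality on the unit sphere.

    proxies: (K, d) learnable proxy vectors, one per class.
    """
    p = F.normalize(proxies, dim=1)             # unit-norm proxies
    gram = p @ p.t()                            # (K, K) cosine similarities
    off_diag = gram - torch.eye(len(p), device=p.device)
    return (off_diag ** 2).mean()               # push off-diagonals to zero
```

In training, this term would be added to the DML objective with a small weight, softly encouraging orthogonal proxies rather than enforcing them exactly.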
arXiv Detail & Related papers (2023-06-22T17:22:15Z)
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z)
- Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
- Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders [62.54431888432302]
We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders.
We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data.
arXiv Detail & Related papers (2020-07-27T22:19:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.