Counterfactual Predictions under Runtime Confounding
- URL: http://arxiv.org/abs/2006.16916v2
- Date: Fri, 16 Apr 2021 01:29:44 GMT
- Title: Counterfactual Predictions under Runtime Confounding
- Authors: Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova
- Abstract summary: We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but some of them cannot be used by the prediction model at runtime.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
- Score: 74.90756694584839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms are commonly used to predict outcomes under a particular decision
or intervention, such as predicting whether an offender will succeed on parole
if placed under minimal supervision. Generally, to learn such counterfactual
prediction models from observational data on historical decisions and
corresponding outcomes, one must measure all factors that jointly affect the
outcomes and the decision taken. Motivated by decision support applications, we
study the counterfactual prediction task in the setting where all relevant
factors are captured in the historical data, but it is either undesirable or
impermissible to use some such factors in the prediction model. We refer to
this setting as runtime confounding. We propose a doubly-robust procedure for
learning counterfactual prediction models in this setting. Our theoretical
analysis and experimental results suggest that our method often outperforms
competing approaches. We also present a validation procedure for evaluating the
performance of counterfactual prediction methods.
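The abstract's procedure can be read as a two-stage, doubly-robust pseudo-outcome regression: nuisance models for the decision and the outcome are fit on all confounders, and a second-stage model regresses the resulting pseudo-outcome on the runtime-available features only. The sketch below illustrates that reading; the split strategy, learner choices, and variable names are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch: two-stage doubly-robust pseudo-outcome regression for
# counterfactual prediction when confounders Z are available at training
# time but may not be used at prediction (runtime) time.
# Learners, split strategy, and names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def fit_dr_counterfactual(V, Z, A, Y, a=1):
    """Learn v -> E[Y under decision a | V=v] using runtime features V only.

    V: runtime-available features, Z: training-only confounders,
    A: observed binary decision, Y: observed outcome.
    """
    X = np.column_stack([V, Z])                      # all confounders (training only)
    idx1, idx2 = train_test_split(np.arange(len(Y)), test_size=0.5, random_state=0)

    # Stage 1: nuisance estimates on one split, using V and Z.
    prop = GradientBoostingClassifier().fit(X[idx1], A[idx1])      # P(A = a | V, Z)
    treated = A[idx1] == a
    out = GradientBoostingRegressor().fit(X[idx1][treated], Y[idx1][treated])  # E[Y | A=a, V, Z]

    # Doubly-robust pseudo-outcome on the other split.
    pi = np.clip(prop.predict_proba(X[idx2])[:, list(prop.classes_).index(a)], 1e-3, 1.0)
    mu = out.predict(X[idx2])
    phi = (A[idx2] == a) / pi * (Y[idx2] - mu) + mu

    # Stage 2: regress the pseudo-outcome on runtime features V only.
    return GradientBoostingRegressor().fit(V[idx2], phi)
```

In practice the roles of the two folds would typically be swapped and the resulting predictions averaged (cross-fitting); a single split is kept here for brevity.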
Related papers
- Microfoundation Inference for Strategic Prediction [26.277259491014163]
We propose a methodology for learning the distribution map that encapsulates the long-term impacts of predictive models on the population.
Specifically, we model agents' responses as a cost-utility problem and propose estimates for said cost.
We provide a rate of convergence for this proposed estimate and assess its quality through empirical demonstrations on a credit-scoring dataset.
arXiv Detail & Related papers (2024-11-13T19:37:49Z)
- Deconfounding Time Series Forecasting [1.5967186772129907]
Time series forecasting is a critical task in various domains, where accurate predictions can drive informed decision-making.
Traditional forecasting methods often rely on current observations of variables to predict future outcomes.
We propose an enhanced forecasting approach that incorporates representations of latent confounders derived from historical data.
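As a rough illustration of the idea summarized above (not the paper's architecture), a forecaster can be augmented with a representation of latent confounders inferred from the historical window; the encoder, dimensions, and forecasting head below are assumptions made for the sketch.

```python
# Hedged sketch: augment a one-step-ahead forecaster with a learned
# representation of latent confounders derived from the historical window.
# Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DeconfoundedForecaster(nn.Module):
    def __init__(self, n_vars, hidden=64, confounder_dim=16):
        super().__init__()
        self.history_encoder = nn.GRU(n_vars, hidden, batch_first=True)
        self.to_confounder = nn.Linear(hidden, confounder_dim)   # summary of latent confounders
        self.head = nn.Linear(n_vars + confounder_dim, n_vars)   # one-step-ahead forecast

    def forward(self, history, current):
        # history: (batch, T, n_vars); current: (batch, n_vars)
        _, h = self.history_encoder(history)
        u_hat = self.to_confounder(h[-1])                        # inferred confounder representation
        return self.head(torch.cat([current, u_hat], dim=-1))
```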
arXiv Detail & Related papers (2024-10-27T12:45:42Z)
- Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z)
- Predicting from Predictions [18.393971232725015]
We study how causal effects of predictions on outcomes can be identified from observational data.
We show that supervised learners that predict from predictions can find transferable functional relationships between features, predictions, and outcomes.
arXiv Detail & Related papers (2022-08-15T16:57:02Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining which of the infinitely many predictions the agent could possibly make would best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify the uncertainty in the forecasts, which deterministic approaches fail to capture, using Bayesian approximation.
The effect of the dropout weights and of long-term prediction on future-state uncertainty is studied.
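A common way to realize the Bayesian approximation described above is Monte Carlo dropout: keep dropout active at inference and treat the spread across repeated forward passes as predictive uncertainty. The model below is a hypothetical stand-in, not the authors' network.

```python
# Hedged sketch: Monte Carlo dropout for trajectory uncertainty.
# The predictor, input shapes, and horizon are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, in_dim=2, hidden=64, horizon=12, p_drop=0.2):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(p_drop)
        self.decoder = nn.Linear(hidden, horizon * 2)        # future (x, y) waypoints

    def forward(self, past):                                  # past: (batch, T, 2)
        _, h = self.encoder(past)
        return self.decoder(self.dropout(h[-1])).view(past.size(0), -1, 2)

def mc_dropout_forecast(model, past, n_samples=50):
    model.train()                      # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(past) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)   # mean trajectory and per-waypoint spread
```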
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
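The non-private building block behind such prediction sets is split conformal prediction, sketched below for a classifier; the paper's differentially private calibration step is not reproduced here, and the function is an illustrative assumption.

```python
# Hedged sketch: split conformal prediction sets for a classifier
# (distribution-free marginal coverage; no privacy mechanism included).
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """cal_probs/test_probs: (n, K) softmax scores; cal_labels: (n,) integer classes."""
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax score of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Include every class whose score falls below the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```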
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows performance competitive with the state of the art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z)