Predicting from Predictions
- URL: http://arxiv.org/abs/2208.07331v1
- Date: Mon, 15 Aug 2022 16:57:02 GMT
- Title: Predicting from Predictions
- Authors: Celestine Mendler-Dünner, Frances Ding, Yixin Wang
- Abstract summary: We study how causal effects of predictions on outcomes can be identified from observational data.
We show that supervised learning methods that predict from predictions can find transferable functional relationships between features, predictions, and outcomes.
- Score: 18.393971232725015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictions about people, such as their expected educational achievement or
their credit risk, can be performative and shape the outcome that they aim to
predict. Understanding the causal effect of these predictions on the eventual
outcomes is crucial for foreseeing the implications of future predictive models
and selecting which models to deploy. However, this causal estimation task
poses unique challenges: model predictions are usually deterministic functions
of input features and highly correlated with outcomes, which can make the
causal effects of predictions impossible to disentangle from the direct effect
of the covariates. We study this problem through the lens of causal
identifiability, and despite the hardness of this problem in full generality,
we highlight three natural scenarios where the causal effect of predictions on
outcomes can be identified from observational data: randomization in
predictions or prediction-based decisions, overparameterization of the
predictive model deployed during data collection, and discrete prediction
outputs. We show empirically that, under suitable identifiability conditions,
standard variants of supervised learning that predict from predictions can find
transferable functional relationships between features, predictions, and
outcomes, allowing for conclusions about newly deployed prediction models. Our
positive results fundamentally rely on model predictions being recorded during
data collection, bringing forward the importance of rethinking standard data
collection practices to enable progress towards a better understanding of
social outcomes and performative feedback loops.
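The first identification scenario above can be illustrated with a minimal sketch on simulated data (this is a hypothetical setup for illustration, not the paper's experiments): when the logged predictions contain independent randomization, regressing outcomes on both features and predictions can separate the performative effect of the prediction from the direct effect of the covariates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: the outcome depends on a feature x AND on the
# deployed model's prediction f (a performative effect of size 0.5).
x = rng.normal(size=n)

# Randomization in predictions (the first scenario): the logged
# prediction is the model output plus independent noise, which breaks
# the deterministic tie between features and predictions.
f = 2.0 * x + rng.normal(scale=1.0, size=n)
y = 1.0 * x + 0.5 * f + rng.normal(scale=0.1, size=n)

# "Predict from predictions": ordinary least squares on (x, f) -> y.
X = np.column_stack([x, f, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

effect_of_prediction = beta[1]  # recovers a value close to 0.5
direct_effect_of_x = beta[0]    # recovers a value close to 1.0
```

Without the added noise, f would be a deterministic function of x, the two regressors would be perfectly collinear, and the two effects would not be separately identifiable.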
Related papers
- Deconfounding Time Series Forecasting [1.5967186772129907]
Time series forecasting is a critical task in various domains, where accurate predictions can drive informed decision-making.
Traditional forecasting methods often rely on current observations of variables to predict future outcomes.
We propose an enhanced forecasting approach that incorporates representations of latent confounders derived from historical data.
arXiv Detail & Related papers (2024-10-27T12:45:42Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining which of the infinitely many predictions the agent could possibly make would best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing uncertainty that deterministic approaches fail to represent.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models exhibit strong prediction bias across labels when given only a few examples.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
The main limitations of ensemble forecasting are its high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept model experiments are conducted to examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
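The doubly-robust idea can be sketched with a standard augmented inverse-propensity-weighted (AIPW) estimator on simulated data; this is a generic illustration under assumed outcome and propensity models, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical data: a binary decision a depends on a confounder v,
# and the outcome y depends on both; v is observed in the historical data.
v = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-v))   # true propensity P(a = 1 | v)
a = rng.binomial(1, p)
y = 2.0 + 1.5 * a + v + rng.normal(scale=0.5, size=n)

# Doubly-robust (AIPW) estimate of E[y | do(a = 1)]: combine an
# outcome model mu1(v) with inverse-propensity weighting.  The
# estimate stays consistent if either model is correctly specified.
mu1 = 2.0 + 1.5 + v            # outcome model for a = 1 (here: the truth)
aipw = mu1 + (a / p) * (y - mu1)
counterfactual_mean = aipw.mean()   # close to 2.0 + 1.5 = 3.5
```

In practice mu1 and p would be fitted on held-out folds (cross-fitting) rather than known; the double robustness means a misspecified outcome model is corrected by the weighting term, and vice versa.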
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Fast, Optimal, and Targeted Predictions using Parametrized Decision Analysis [0.0]
We develop a class of parametrized actions for Bayesian decision analysis that produce optimal, scalable, and simple targeted predictions.
Predictions are constructed for physical activity data from the National Health and Nutrition Examination Survey.
arXiv Detail & Related papers (2020-06-23T15:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.