Incentivizing honest performative predictions with proper scoring rules
- URL: http://arxiv.org/abs/2305.17601v2
- Date: Tue, 30 May 2023 17:20:13 GMT
- Title: Incentivizing honest performative predictions with proper scoring rules
- Authors: Caspar Oesterheld, Johannes Treutlein, Emery Cooper, Rubi Hudson
- Abstract summary: We say a prediction is a fixed point if it accurately reflects the expert's beliefs after that prediction has been made.
We show that, for binary predictions, if the influence of the expert's prediction on outcomes is bounded, it is possible to define scoring rules under which optimal reports are arbitrarily close to fixed points.
- Score: 4.932130498861987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proper scoring rules incentivize experts to accurately report beliefs,
assuming predictions cannot influence outcomes. We relax this assumption and
investigate incentives when predictions are performative, i.e., when they can
influence the outcome of the prediction, such as when making public predictions
about the stock market. We say a prediction is a fixed point if it accurately
reflects the expert's beliefs after that prediction has been made. We show that
in this setting, reports maximizing expected score generally do not reflect an
expert's beliefs, and we give bounds on the inaccuracy of such reports. We show
that, for binary predictions, if the influence of the expert's prediction on
outcomes is bounded, it is possible to define scoring rules under which optimal
reports are arbitrarily close to fixed points. However, this is impossible for
predictions over more than two outcomes. We also perform numerical simulations
in a toy setting, showing that our bounds are tight in some situations and that
prediction error is often substantial (greater than 5-10%). Lastly, we discuss
alternative notions of optimality, including performative stability, and show
that they incentivize reporting fixed points.
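To make the incentive problem concrete, here is a minimal sketch of a binary toy setting in the spirit of the paper's simulations, using the quadratic (Brier) scoring rule. The linear influence model `q` and the parameter values are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

Q0, C = 0.4, 0.2  # baseline outcome probability, strength of performativity

def q(p):
    """Outcome probability once report p is public: the report pulls
    the outcome toward itself with strength C (assumed linear model)."""
    return Q0 + C * (p - 0.5)

def expected_brier(p):
    """Expected quadratic (Brier) score of report p, accounting for
    the report's own influence on the outcome distribution."""
    return -(q(p) * (1 - p) ** 2 + (1 - q(p)) * p ** 2)

grid = np.linspace(0.0, 1.0, 100_001)
optimal_report = grid[np.argmax(expected_brier(grid))]

# A fixed point p* satisfies p* = q(p*): the prediction stays accurate
# after it is made.  For this linear q, p* = (Q0 - 0.5*C) / (1 - C).
fixed_point = (Q0 - 0.5 * C) / (1 - C)

print(f"score-maximizing report: {optimal_report:.4f}")  # ~0.3333
print(f"fixed point:             {fixed_point:.4f}")     # 0.3750
```

The score-maximizing report undershoots the fixed point: the expert profits from steering the outcome toward a more predictable value, which is exactly the misalignment the paper's bounds quantify.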
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs between prediction accuracy and social welfare, and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Making Decisions under Outcome Performativity [9.962472413291803]
We introduce a new optimality concept -- performative omniprediction.
A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule with respect to many possible decision-making problems.
We show that efficient performative omnipredictors exist, under a natural restriction of performative prediction.
arXiv Detail & Related papers (2022-10-04T17:04:47Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
In dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting via Bayesian approximation, capturing variability that deterministic approaches miss.
The effect of dropout weights and of long-term prediction on future-state uncertainty is also studied.
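The Bayesian approximation here is typically realized as Monte Carlo dropout: dropout is kept active at inference time, and the spread of repeated stochastic forward passes is read as predictive uncertainty. A minimal NumPy sketch under that assumption (the network, weights, and data below are toy stand-ins, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regressor: flattened past positions -> next position.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def forward(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at inference."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted dropout scaling
    return h @ W2 + b2

past_track = rng.normal(size=8)             # hypothetical past trajectory
samples = np.stack([forward(past_track) for _ in range(100)])

mean_pred = samples.mean(axis=0)            # point forecast
uncertainty = samples.std(axis=0)           # predictive spread per axis
print(mean_pred, uncertainty)
```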
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions using two slide-like curves that oppose each other.
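The paper's exact curves are not reproduced here, but the idea can be sketched as a smooth hinge whose slope parameter makes the penalty steep, with mirrored branches that oppose each other for correct and incorrect predictions. The functional form and constants below are illustrative assumptions, not the paper's loss.

```python
import numpy as np

def steep_slope_loss(score, is_correct, k=8.0, margin=0.5):
    """Illustrative steep-slope loss: push confidence scores of correct
    predictions above +margin and of incorrect ones below -margin, with
    slope k controlling how sharply violations are penalized."""
    z = np.where(is_correct, margin - score, score + margin)
    return np.logaddexp(0.0, k * z) / k   # smooth hinge with steep slope k

scores = np.array([1.2, 0.1, -0.8, 0.9])
correct = np.array([True, True, False, False])
print(steep_slope_loss(scores, correct))
```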
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- How to "Improve" Prediction Using Behavior Modification [0.0]
Data science researchers design algorithms, models, and approaches to improve prediction.
Predictive accuracy is improved with larger and richer data.
Platforms can stealthily achieve better prediction accuracy by pushing users' behaviors towards their predicted values.
Our derivation elucidates implications of such behavior modification to data scientists, platforms, their customers, and the humans whose behavior is manipulated.
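A toy simulation of this mechanism (the nudge strength and the constant predictor are illustrative assumptions, not the paper's derivation): nudging behavior toward a fixed prediction shrinks measured error without the predictor learning anything.

```python
import numpy as np

rng = np.random.default_rng(1)

true_behavior = rng.normal(loc=0.0, scale=1.0, size=10_000)
prediction = np.zeros_like(true_behavior)     # a crude constant predictor

# Without intervention, prediction error reflects genuine behavior variance.
baseline_mse = np.mean((true_behavior - prediction) ** 2)

# "Behavior modification": nudge each user part-way toward their
# predicted value (nudge strength is a free parameter of this toy model).
nudge = 0.6
modified_behavior = (1 - nudge) * true_behavior + nudge * prediction
modified_mse = np.mean((modified_behavior - prediction) ** 2)

print(f"MSE before nudging: {baseline_mse:.3f}")  # ~1.0
print(f"MSE after nudging:  {modified_mse:.3f}")  # ~(1-nudge)^2 = 0.16
```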
arXiv Detail & Related papers (2020-08-26T12:39:35Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in a setting where all relevant factors are captured in the historical data, but some of them cannot be used by the model at prediction time.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
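A generic sketch of the doubly-robust (AIPW) idea such procedures build on, with nuisance models taken as known for brevity; the paper's actual procedure additionally handles factors unavailable at prediction time.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

x = rng.normal(size=n)                      # training-time confounder
a = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment depends on x
y = 2.0 * a + x + rng.normal(size=n)        # outcome under treatment a

# Nuisance estimates (here taken as known for simplicity; in practice
# both are fit from data, and the estimator stays consistent if either
# one is correct -- that is the "doubly robust" property).
pi_hat = 1 / (1 + np.exp(-x))               # propensity P(A=1 | x)
mu_hat = 2.0 + x                            # outcome model E[Y | A=1, x]

# AIPW pseudo-outcome for the counterfactual outcome under A=1.
pseudo = mu_hat + (a / pi_hat) * (y - mu_hat)
print(f"estimated E[Y(1)]: {pseudo.mean():.3f}   (true value: 2.0)")
```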
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Measuring Forecasting Skill from Text [15.795144936579627]
We explore connections between the language people use to describe their predictions and their forecasting skill.
We present a number of linguistic metrics which are computed over text associated with people's predictions about the future.
We demonstrate that it is possible to accurately predict forecasting skill using a model that is based solely on language.
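A minimal sketch of such a language-only pipeline (the texts, labels, and features below are hypothetical; the paper uses real forecasting data and purpose-built linguistic metrics):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: the text of a prediction, plus a binary label
# for whether its author was a skilled forecaster.
texts = [
    "I am certain this will definitely happen by March.",
    "There is perhaps a 60% chance; the evidence is mixed.",
    "Absolutely guaranteed, no doubt whatsoever.",
    "On balance, slightly more likely than not, around 55%.",
]
skilled = [0, 1, 0, 1]

# Classify forecasting skill from language alone.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, skilled)
print(model.predict(["Roughly a 70% chance, though I could be wrong."]))
```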
arXiv Detail & Related papers (2020-06-12T19:04:10Z)
- Malicious Experts versus the multiplicative weights algorithm in online prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other one is malicious, who knows true outcomes at each round and makes predictions in order to maximize the loss of the forecaster.
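For reference, a compact sketch of the multiplicative weights setup with one honest and one adversarial expert (the naive always-wrong adversary below is an illustrative simplification; the paper analyzes optimal malicious strategies):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, eps, T = 0.8, 0.1, 1_000
w = np.ones(2)                       # weights: [honest expert, malicious expert]
mistakes = 0

for _ in range(T):
    outcome = int(rng.integers(2))
    honest = outcome if rng.random() < mu else 1 - outcome
    malicious = 1 - outcome          # naive adversary; the paper studies optimal ones
    preds = np.array([honest, malicious])

    # Weighted-majority forecast, then multiplicative weights update:
    # each expert's weight is multiplied by (1 - eps) after a mistake.
    forecast = int(np.dot(w, preds) / w.sum() >= 0.5)
    mistakes += int(forecast != outcome)
    w *= np.where(preds != outcome, 1 - eps, 1.0)

print(f"forecaster error rate: {mistakes / T:.3f}  (honest expert errs {1 - mu:.1f})")
```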
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
- Performative Prediction [31.876692592395777]
We develop a framework for performative prediction bringing together concepts from statistics, game theory, and causality.
A conceptual novelty is an equilibrium notion we call performative stability.
Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
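The retraining dynamics can be sketched in one dimension, assuming a linear distribution shift (the shift model and constants are illustrative; the paper's smoothness and strong-convexity conditions are what guarantee convergence in general):

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 1.0, 0.5           # deploying theta shifts the data mean to a + b*theta

theta = 0.0
for t in range(10):
    data = rng.normal(loc=a + b * theta, scale=1.0, size=20_000)
    theta = data.mean()   # retrain: under squared loss, new theta is the sample mean
    print(f"round {t}: theta = {theta:.4f}")

# A performatively stable point solves theta = a + b*theta, i.e.
# theta* = a / (1 - b) = 2.0; retraining converges there when |b| < 1.
```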
arXiv Detail & Related papers (2020-02-16T20:29:42Z)