Right Decisions from Wrong Predictions: A Mechanism Design Alternative
to Individual Calibration
- URL: http://arxiv.org/abs/2011.07476v2
- Date: Tue, 2 Mar 2021 06:03:57 GMT
- Title: Right Decisions from Wrong Predictions: A Mechanism Design Alternative
to Individual Calibration
- Authors: Shengjia Zhao, Stefano Ermon
- Abstract summary: Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
- Score: 107.15813002403905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision makers often need to rely on imperfect probabilistic forecasts.
While average performance metrics are typically available, it is difficult to
assess the quality of individual forecasts and the corresponding utilities. To
convey confidence about individual predictions to decision-makers, we propose a
compensation mechanism ensuring that the forecasted utility matches the
actually accrued utility. While a naive scheme to compensate decision-makers
for prediction errors can be exploited and might not be sustainable in the long
run, we propose a mechanism based on fair bets and online learning that
provably cannot be exploited. We demonstrate an application showing how
passengers could confidently optimize individual travel plans based on flight
delay probabilities estimated by an airline.
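In code, the fair-bet idea behind this mechanism can be sketched in a few lines; the helper names and the flight-delay utilities below are our own illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (our own, not the authors' code) of the fair-bet
# compensation idea. The airline forecasts delay probability p; a passenger's
# plan yields utility u_delayed if the flight is delayed and u_on_time
# otherwise. A side payment equal to the forecasted minus the realized
# utility has zero expected value under the forecast (a fair bet), and it
# makes the passenger's total utility match the forecast in every outcome.

def forecasted_utility(p, u_delayed, u_on_time):
    """Expected utility of the plan under the airline's forecast p."""
    return p * u_delayed + (1 - p) * u_on_time

def compensation(p, u_delayed, u_on_time, delayed):
    """Fair-bet side payment: forecasted utility minus realized utility."""
    realized = u_delayed if delayed else u_on_time
    return forecasted_utility(p, u_delayed, u_on_time) - realized

# With the payment added, the passenger accrues the forecasted utility
# whether or not the flight is delayed:
p, u_d, u_o = 0.2, -100.0, 10.0
for delayed in (True, False):
    realized = u_d if delayed else u_o
    total = realized + compensation(p, u_d, u_o, delayed)
    assert abs(total - forecasted_utility(p, u_d, u_o)) < 1e-9
```

As the abstract notes, a naive compensation scheme of this form can be exploited; the paper's contribution is to combine fair bets with online learning so that the mechanism provably cannot be.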
Related papers
- Microfoundation Inference for Strategic Prediction [26.277259491014163]
We propose a methodology for learning the distribution map that encapsulates the long-term impacts of predictive models on the population.
Specifically, we model agents' responses as a cost-utility problem and propose estimates for said cost.
We provide a rate of convergence for this proposed estimate and assess its quality through empirical demonstrations on a credit-scoring dataset.
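The cost-utility response model in this summary can be made concrete with a one-dimensional sketch; the linear score benefit, the quadratic cost, and the name `best_response` are our own illustrative assumptions, not the paper's estimator:

```python
def best_response(x0, score_weight, cost=1.0):
    """Utility-maximizing feature shift of a strategic agent that trades a
    linear score benefit against a quadratic cost of moving:
        maximize over x:  score_weight * x - (cost / 2) * (x - x0) ** 2
    Setting the derivative to zero gives the closed form below."""
    return x0 + score_weight / cost

# An applicant at x0 = 1.0 facing score weight 2.0 best-responds with x = 3.0.
```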
arXiv Detail & Related papers (2024-11-13T19:37:49Z)
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets [14.478233576808876]
In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that, under a natural but unverifiable monotonicity assumption, we can estimate how frequently a system may cause harm using only predictions made by humans on their own.
We also show that, under a weaker assumption that can be verified, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own.
arXiv Detail & Related papers (2024-06-10T18:00:00Z)
- Contract Scheduling with Distributional and Multiple Advice [37.64065953072774]
Previous work has shown that a prediction of the interruption time can help improve the performance of contract-based systems.
We introduce and study more general and realistic learning-augmented settings in which the prediction is in the form of a probability distribution.
We show that the resulting system is robust to prediction errors in the distributional setting.
arXiv Detail & Related papers (2024-04-18T19:58:11Z)
- Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions [80.34972679938483]
We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions.
Decisions produced by our algorithms are safe in the sense that they come with provable statistical guarantees of having low risk.
Experiments demonstrate the utility of our approach in robot motion planning around humans, automated stock trading, and robot manufacturing.
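A toy rendition of the online risk-control idea behind this framework reads as follows; the additive update rule, the loss rates, and the 0.5 threshold are our illustrative assumptions, not the paper's algorithm:

```python
import random

def conformal_update(lam, loss, epsilon=0.1, eta=0.05):
    """One online step: raise the conservativeness parameter lam after a
    loss above the target risk epsilon, lower it otherwise, so that the
    long-run average loss tracks epsilon."""
    return lam + eta * (loss - epsilon)

# Toy simulation: low lam means aggressive decisions (loss w.p. 0.3),
# high lam means cautious ones (loss w.p. 0.05); the controller balances
# the two regimes so the average loss approaches the target epsilon = 0.1.
random.seed(0)
lam, losses = 0.5, []
for _ in range(5000):
    aggressive = lam < 0.5
    p_loss = 0.3 if aggressive else 0.05
    loss = 1.0 if random.random() < p_loss else 0.0
    losses.append(loss)
    lam = conformal_update(lam, loss)
avg_loss = sum(losses) / len(losses)
```

In this toy setting the controller settles into spending roughly a fifth of the rounds in the aggressive regime, which is exactly the mix whose average loss equals the target.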
arXiv Detail & Related papers (2023-10-09T17:59:30Z)
- Creating Probabilistic Forecasts from Arbitrary Deterministic Forecasts using Conditional Invertible Neural Networks [0.19573380763700712]
We use a conditional Invertible Neural Network (cINN) to learn the underlying distribution of the data and then combine the uncertainty from this distribution with an arbitrary deterministic forecast.
Our approach enables the simple creation of probabilistic forecasts without complicated statistical loss functions or further assumptions.
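Stripped of the cINN itself, the recipe amounts to sampling residuals from a learned noise model and adding them to the point forecast; the stand-in Gaussian sampler below is our illustrative assumption:

```python
import random

def probabilistic_forecast(point_forecast, sample_residual, n=1000):
    """Empirical predictive distribution: add sampled residuals to the
    deterministic forecast and sort the result for quantile lookup."""
    return sorted(point_forecast + sample_residual() for _ in range(n))

random.seed(1)
# Stand-in residual sampler; in the paper this role is played by the cINN.
samples = probabilistic_forecast(5.0, lambda: random.gauss(0.0, 1.0))
q10, q90 = samples[99], samples[899]  # rough 10% / 90% quantiles
```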
arXiv Detail & Related papers (2023-02-03T15:11:39Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining which of the infinitely many predictions the agent could possibly make might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts along with an estimate of their uncertainty.
Its main limitations are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept model experiments examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty, using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.