Right Decisions from Wrong Predictions: A Mechanism Design Alternative
to Individual Calibration
- URL: http://arxiv.org/abs/2011.07476v2
- Date: Tue, 2 Mar 2021 06:03:57 GMT
- Title: Right Decisions from Wrong Predictions: A Mechanism Design Alternative
to Individual Calibration
- Authors: Shengjia Zhao, Stefano Ermon
- Abstract summary: Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
- Score: 107.15813002403905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision makers often need to rely on imperfect probabilistic forecasts.
While average performance metrics are typically available, it is difficult to
assess the quality of individual forecasts and the corresponding utilities. To
convey confidence about individual predictions to decision-makers, we propose a
compensation mechanism ensuring that the forecasted utility matches the
actually accrued utility. While a naive scheme to compensate decision-makers
for prediction errors can be exploited and might not be sustainable in the long
run, we propose a mechanism based on fair bets and online learning that
provably cannot be exploited. We demonstrate an application showing how
passengers could confidently optimize individual travel plans based on flight
delay probabilities estimated by an airline.
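The core idea of the compensation mechanism can be illustrated with a simplified sketch (not the paper's full online-learning construction): the decision-maker places a fair bet on the forecasted event, priced at the forecast probability, with a stake chosen so that realized utility plus bet payoff always equals the forecasted utility. The function names and the two-outcome utility setup below are illustrative assumptions, using the flight-delay example.

```python
def fair_bet_compensation(p, y, u_delay, u_ok):
    """Payoff of a fair bet on the delay event.

    The passenger stakes (u_ok - u_delay) on a bet priced at the
    forecast probability p. The payoff stake * (y - p) has zero
    expectation if p is the true delay probability, so the
    forecaster cannot be exploited on average.
    """
    stake = u_ok - u_delay
    return stake * (y - p)  # y = 1 if the flight was delayed, else 0


def total_utility(p, y, u_delay, u_ok):
    """Realized utility plus compensation.

    By construction this always equals the forecasted utility
    p * u_delay + (1 - p) * u_ok, regardless of the outcome y.
    """
    realized = y * u_delay + (1 - y) * u_ok
    return realized + fair_bet_compensation(p, y, u_delay, u_ok)
```

For example, with a forecast delay probability of 0.25, a delay utility of -100, and an on-time utility of 0, the forecasted utility is -25; the compensation makes the total utility -25 whether or not the flight is actually delayed. The paper's contribution is making such bets sustainable for the forecaster over repeated interactions via online learning, which this sketch does not capture.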
Related papers
- Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets [14.478233576808876]
In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that under a natural, unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using predictions made by humans on their own.
We also show that, under a weaker, verifiable assumption, we can bound how frequently a system may cause harm, again using only predictions made by humans on their own.
arXiv Detail & Related papers (2024-06-10T18:00:00Z) - Contract Scheduling with Distributional and Multiple Advice [37.64065953072774]
Previous work has shown that a prediction of the interruption time can help improve the performance of contract-based systems.
We introduce and study more general and realistic learning-augmented settings in which the prediction is in the form of a probability distribution.
We show that the resulting system is robust to prediction errors in the distributional setting.
arXiv Detail & Related papers (2024-04-18T19:58:11Z) - Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions [80.34972679938483]
We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions.
Decisions produced by our algorithms are safe in the sense that they come with provable statistical guarantees of having low risk.
Experiments demonstrate the utility of our approach in robot motion planning around humans, automated stock trading, and robot manufacturing.
arXiv Detail & Related papers (2023-10-09T17:59:30Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Fairness-enhancing deep learning for ride-hailing demand prediction [3.911105164672852]
Short-term demand forecasting for on-demand ride-hailing services is one of the fundamental issues in intelligent transportation systems.
Previous travel demand forecasting research predominantly focused on improving prediction accuracy, ignoring fairness issues.
This study investigates how to measure, evaluate, and enhance prediction fairness between disadvantaged and privileged communities.
arXiv Detail & Related papers (2023-03-10T04:37:14Z) - Creating Probabilistic Forecasts from Arbitrary Deterministic Forecasts
using Conditional Invertible Neural Networks [0.19573380763700712]
We use a conditional Invertible Neural Network (cINN) to learn the underlying distribution of the data and then combine the uncertainty from this distribution with an arbitrary deterministic forecast.
Our approach enables the simple creation of probabilistic forecasts without complicated statistical loss functions or further assumptions.
arXiv Detail & Related papers (2023-02-03T15:11:39Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian
approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing uncertainty that deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - Evaluation of Machine Learning Techniques for Forecast Uncertainty
Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
The main limitations of ensemble forecasting are its high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept model experiments are conducted to examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z) - When Does Uncertainty Matter?: Understanding the Impact of Predictive
Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.