Forecasting for Swap Regret for All Downstream Agents
- URL: http://arxiv.org/abs/2402.08753v2
- Date: Sat, 15 Jun 2024 20:31:37 GMT
- Title: Forecasting for Swap Regret for All Downstream Agents
- Authors: Aaron Roth, Mirah Shi
- Abstract summary: We study the problem of making predictions so that downstream agents who best respond to them will be guaranteed diminishing swap regret.
We show that by making predictions that are not calibrated, but are unbiased subject to a carefully selected collection of events, we can guarantee arbitrary downstream agents diminishing swap regret.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of making predictions so that downstream agents who best respond to them will be guaranteed diminishing swap regret, no matter what their utility functions are. It has been known since Foster and Vohra (1997) that agents who best-respond to calibrated forecasts have no swap regret. Unfortunately, the best known algorithms for guaranteeing calibrated forecasts in sequential adversarial environments do so at rates that degrade exponentially with the dimension of the prediction space. In this work, we show that by making predictions that are not calibrated, but are unbiased subject to a carefully selected collection of events, we can guarantee arbitrary downstream agents diminishing swap regret at rates that substantially improve over the rates that result from calibrated forecasts -- while maintaining the appealing property that our forecasts give guarantees for any downstream agent, without our forecasting algorithm needing to know their utility function. We give separate results in the ``low'' (1 or 2) dimensional setting and the ``high'' ($> 2$) dimensional setting. In the low dimensional setting, we show how to make predictions such that all agents who best respond to our predictions have diminishing swap regret -- in 1 dimension, at the optimal $O(\sqrt{T})$ rate. In the high dimensional setting we show how to make forecasts that guarantee regret scaling at a rate of $O(T^{2/3})$ (crucially, a dimension independent exponent), under the assumption that downstream agents smoothly best respond. Our results stand in contrast to rates that derive from agents who best respond to calibrated forecasts, which have an exponential dependence on the dimension of the prediction space.
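To make the central quantity concrete, here is a minimal sketch (function names are illustrative, not from the paper) of the empirical swap regret of a downstream agent who best responds to forecasts; the paper's forecasting algorithms themselves are not reproduced here.

```python
import numpy as np

def best_response(forecast, utility):
    """Action maximizing expected utility under a forecast distribution
    over outcomes; utility has shape (n_actions, n_outcomes)."""
    return int(np.argmax(utility @ forecast))

def swap_regret(actions, outcomes, utility):
    """Empirical swap regret of a played action sequence: the hindsight
    gain from the best mapping that replaces every occurrence of each
    action with some fixed alternative action."""
    realized = sum(utility[a, o] for a, o in zip(actions, outcomes))
    swapped = 0.0
    for a in set(actions):
        rounds = [o for act, o in zip(actions, outcomes) if act == a]
        # best single replacement for all rounds where `a` was played
        swapped += max(sum(utility[b, o] for o in rounds)
                       for b in range(utility.shape[0]))
    return swapped - realized
```

"Diminishing" swap regret means this quantity grows sublinearly in the number of rounds $T$; Foster and Vohra's result guarantees this for agents best responding to calibrated forecasts, while the paper achieves it with the weaker requirement of unbiasedness conditional on a chosen collection of events.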
Related papers
- Does Confidence Calibration Help Conformal Prediction? [12.119612461168941]
We show that post-hoc calibration methods lead to larger prediction sets with improved calibration.
We propose a novel method, Conformal Temperature Scaling (ConfTS), which rectifies the objective through the gap between the threshold and the non-conformity score of the ground-truth label.
arXiv Detail & Related papers (2024-02-06T19:27:48Z) - U-Calibration: Forecasting for an Unknown Agent [29.3181385170725]
We show that optimizing forecasts for a single scoring rule cannot guarantee low regret for all possible agents.
We present a new metric for evaluating forecasts that we call U-calibration, equal to the maximal regret of the sequence of forecasts.
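As a rough illustration of the regret being measured, the sketch below evaluates binary forecasts against the best fixed forecast in hindsight under a single scoring rule (squared/Brier loss is an assumption here; U-calibration instead takes the worst case over all bounded proper scoring rules).

```python
def scoring_regret(forecasts, outcomes, score):
    """Regret of binary probability forecasts under one scoring rule,
    versus the best fixed forecast in hindsight (lower score = better)."""
    incurred = sum(score(p, y) for p, y in zip(forecasts, outcomes))
    grid = [i / 100 for i in range(101)]  # coarse grid of fixed forecasts
    best_fixed = min(sum(score(q, y) for y in outcomes) for q in grid)
    return incurred - best_fixed

brier = lambda p, y: (p - y) ** 2  # one proper scoring rule
```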
arXiv Detail & Related papers (2023-06-30T23:05:26Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Faster online calibration without randomization: interval forecasts and the power of two choices [43.17917448937131]
We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.
Inspired by the works on the "power of two choices" and imprecise probability theory, we study a small variant of the standard online calibration problem.
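For reference, the standard notion these works build on can be sketched as the $\ell_1$ calibration error of a binary forecast sequence (the function name below is illustrative): for each forecast value issued, compare it with the empirical frequency of positive outcomes on the rounds where it was issued.

```python
from collections import defaultdict

def l1_calibration_error(forecasts, outcomes):
    """L1 calibration error of binary forecasts: for each distinct
    forecast value p, the gap between p and the empirical frequency of
    1s on rounds where p was issued, weighted by how often p was issued."""
    buckets = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        buckets[p].append(y)
    T = len(forecasts)
    return sum(len(ys) / T * abs(p - sum(ys) / len(ys))
               for p, ys in buckets.items())
```

A forecaster is well calibrated when this error vanishes as $T$ grows; the adversarial setting studied above asks how fast that can be guaranteed against a worst-case binary sequence.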
arXiv Detail & Related papers (2022-04-27T17:33:23Z) - Taming Overconfident Prediction on Unlabeled Data from Hindsight [50.9088560433925]
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning.
This paper proposes a dual mechanism, named ADaptive Sharpening (ADS), which first applies a soft-threshold to adaptively mask out determinate and negligible predictions.
Used as a plug-in, ADS significantly improves state-of-the-art SSL methods.
arXiv Detail & Related papers (2021-12-15T15:17:02Z) - Propagating State Uncertainty Through Trajectory Forecasting [34.53847097769489]
Trajectory forecasting is surrounded by uncertainty as its inputs are produced by (noisy) upstream perception.
Most trajectory forecasting methods do not account for this upstream uncertainty, instead consuming only the most-likely values.
We present a novel method for incorporating perceptual state uncertainty in trajectory forecasting, a key component of which is a new statistical distance-based loss function.
arXiv Detail & Related papers (2021-10-07T08:51:16Z) - Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions via two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z) - Heterogeneous-Agent Trajectory Forecasting Incorporating Class Uncertainty [54.88405167739227]
We present HAICU, a method for heterogeneous-agent trajectory forecasting that explicitly incorporates agents' class probabilities.
We additionally present PUP, a new challenging real-world autonomous driving dataset.
We demonstrate that incorporating class probabilities in trajectory forecasting significantly improves performance in the face of uncertainty.
arXiv Detail & Related papers (2021-04-26T10:28:34Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.