Robust Losses for Decision-Focused Learning
- URL: http://arxiv.org/abs/2310.04328v2
- Date: Sat, 27 Jul 2024 13:40:56 GMT
- Title: Robust Losses for Decision-Focused Learning
- Authors: Noah Schutte, Krzysztof Postek, Neil Yorke-Smith
- Abstract summary: Decision-focused learning aims at training the predictive model to minimize regret, i.e., the loss incurred by making a suboptimal decision.
Empirical regret can be an ineffective surrogate because empirical optimal decisions can vary substantially from expected optimal decisions.
We propose three novel loss functions that approximate expected regret more robustly.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Optimization models used to make discrete decisions often contain uncertain parameters that are context-dependent and estimated through prediction. To account for the quality of the decision made based on the prediction, decision-focused learning (end-to-end predict-then-optimize) aims at training the predictive model to minimize regret, i.e., the loss incurred by making a suboptimal decision. Although the gradient of this loss with respect to the predictive-model parameters is zero almost everywhere for optimization problems with a linear objective, effective gradient-based learning approaches have been proposed to minimize the expected loss, using the empirical loss as a surrogate. However, empirical regret can be an ineffective surrogate because empirical optimal decisions can vary substantially from expected optimal decisions. To understand the impact of this deficiency, we evaluate the effect of aleatoric and epistemic uncertainty on the accuracy of empirical regret as a surrogate. Next, we propose three novel loss functions that approximate expected regret more robustly. Experimental results show that training two state-of-the-art decision-focused learning approaches using robust regret losses improves test-sample empirical regret in general while keeping computational time equivalent relative to the number of training epochs.
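For a linear objective, the empirical regret the abstract refers to can be computed directly: take the decision that is optimal under the predicted parameters, evaluate it under the true parameters, and subtract the true optimal value. A minimal sketch (the function name `empirical_regret` and the tiny explicit feasible set are illustrative assumptions, not part of the paper):

```python
import numpy as np

def empirical_regret(c_true, c_pred, feasible):
    """Regret of acting on predicted costs c_pred when true costs are c_true.

    feasible: array of shape (n_decisions, n_params); each row is a feasible
    decision w, and a decision's cost under parameters c is c @ w (linear objective).
    """
    w_pred = feasible[np.argmin(feasible @ c_pred)]  # decision optimal for the prediction
    w_star = feasible[np.argmin(feasible @ c_true)]  # decision optimal in hindsight
    return c_true @ w_pred - c_true @ w_star         # always >= 0

# Toy example: choose exactly one of three items (0/1 decision vectors).
feasible = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
c_true = np.array([3.0, 1.0, 2.0])  # true costs: the second decision is optimal
c_pred = np.array([1.0, 2.0, 3.0])  # misprediction makes the first decision look best

print(empirical_regret(c_true, c_pred, feasible))  # 3.0 - 1.0 = 2.0
```

The piecewise-constant argmin in this sketch is also what makes the gradient of regret zero almost everywhere, which is why the surrogate losses discussed in the abstract are needed for gradient-based training.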
Related papers
- Smart Predict-then-Optimize Method with Dependent Data: Risk Bounds and Calibration of Autoregression [7.369846475695131]
We present an autoregressive SPO method directly targeting the optimization problem at the decision stage.
We conduct experiments to demonstrate the effectiveness of the SPO+ surrogate compared to the absolute loss and the least squares loss.
arXiv Detail & Related papers (2024-11-19T17:02:04Z)
- Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss which attenuates the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z)
- Asymptotically Optimal Regret for Black-Box Predict-then-Optimize [7.412445894287709]
We study new black-box predict-then-optimize problems that lack special structure and where we only observe the reward from the action taken.
We present a novel loss function, which we call Empirical Soft Regret (ESR), designed to significantly improve reward when used in training.
We also show our approach significantly outperforms state-of-the-art algorithms on real-world decision-making problems in news recommendation and personalized healthcare.
arXiv Detail & Related papers (2024-06-12T04:46:23Z)
- Optimistic Regret Bounds for Online Learning in Adversarial Markov Decision Processes [5.116582735311639]
We introduce and study a new variant of AMDP, which aims to minimize regret while utilizing a set of cost predictors.
We develop a new policy search method that achieves sublinear optimistic regret with high probability, i.e., a regret bound that degrades gracefully with the estimation power of the cost predictors.
arXiv Detail & Related papers (2024-05-03T15:44:31Z)
- Decision-focused predictions via pessimistic bilevel optimization: a computational study [0.7499722271664147]
Uncertainty in optimization parameters is an important and longstanding challenge.
We build predictive models that are evaluated with a regret measure on the decisions taken with them.
We show various computational techniques to achieve tractability.
arXiv Detail & Related papers (2023-12-29T15:05:00Z)
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- Post-hoc loss-calibration for Bayesian neural networks [25.05373000435213]
We develop methods for correcting approximate posterior predictive distributions, encouraging them to prefer high-utility decisions.
In contrast to previous work, our approach is agnostic to the choice of the approximate inference algorithm.
arXiv Detail & Related papers (2021-06-13T13:53:27Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- A Regret Minimization Approach to Iterative Learning Control [61.37088759497583]
We propose a new performance metric, planning regret, which replaces the standard uncertainty assumptions with worst-case regret.
We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.
arXiv Detail & Related papers (2021-02-26T13:48:49Z)
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [57.957466608543676]
We analyze the influence of adversarial training on the loss landscape of machine learning models.
We show that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients.
arXiv Detail & Related papers (2020-06-15T13:50:23Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.