Decision Making under Model Misspecification: DRO with Robust Bayesian Ambiguity Sets
- URL: http://arxiv.org/abs/2505.03585v1
- Date: Tue, 06 May 2025 14:46:16 GMT
- Title: Decision Making under Model Misspecification: DRO with Robust Bayesian Ambiguity Sets
- Authors: Charita Dellaporta, Patrick O'Hara, Theodoros Damoulas
- Abstract summary: We introduce DRO with Robust Bayesian Ambiguity Sets (DRO-RoBAS), which protect against model misspecification. These are Maximum Mean Discrepancy ambiguity sets centred at a robust posterior predictive distribution. We show that the resulting optimisation problem admits a dual formulation in the Reproducing Kernel Hilbert Space.
- Score: 8.642152250082368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributionally Robust Optimisation (DRO) protects risk-averse decision-makers by considering the worst-case risk within an ambiguity set of distributions based on the empirical distribution or a model. To further guard against finite, noisy data, model-based approaches admit Bayesian formulations that propagate uncertainty from the posterior to the decision-making problem. However, when the model is misspecified, the decision maker must stretch the ambiguity set to contain the data-generating process (DGP), leading to overly conservative decisions. We address this challenge by introducing DRO with Robust, to model misspecification, Bayesian Ambiguity Sets (DRO-RoBAS). These are Maximum Mean Discrepancy ambiguity sets centred at a robust posterior predictive distribution that incorporates beliefs about the DGP. We show that the resulting optimisation problem obtains a dual formulation in the Reproducing Kernel Hilbert Space and we give probabilistic guarantees on the tolerance level of the ambiguity set. Our method outperforms other Bayesian and empirical DRO approaches in out-of-sample performance on the Newsvendor and Portfolio problems with various cases of model misspecification.
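The two ingredients highlighted in the abstract, an MMD ambiguity set centred at a posterior predictive distribution and a dual bound in the RKHS, can be illustrated numerically. The sketch below is not the authors' implementation: the Gaussian kernel, the lognormal data-generating process, the newsvendor costs, and the `rbf_kernel`, `mmd_squared`, `worst_case_risk_bound`, and `rkhs_norm_bound` names are all assumptions chosen for illustration. It estimates the squared MMD between the ambiguity-set centre and the DGP, then applies the generic kernel-DRO inequality $\sup_{Q:\,\mathrm{MMD}(Q,\hat{P}) \le \varepsilon} \mathbb{E}_Q[f] \le \mathbb{E}_{\hat{P}}[f] + \varepsilon \|f\|_{\mathcal{H}}$ (valid for losses in the RKHS) to bound the worst-case risk of a candidate decision.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    """Gaussian (RBF) kernel matrix for 1-D sample arrays x and y."""
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / lengthscale) ** 2)

def mmd_squared(x, y, lengthscale=1.0):
    """Unbiased estimate of the squared Maximum Mean Discrepancy between x and y."""
    kxx = rbf_kernel(x, x, lengthscale)
    kyy = rbf_kernel(y, y, lengthscale)
    kxy = rbf_kernel(x, y, lengthscale)
    n, m = len(x), len(y)
    return ((kxx.sum() - np.trace(kxx)) / (n * (n - 1))
            + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
            - 2.0 * kxy.mean())

def worst_case_risk_bound(centre_losses, epsilon, rkhs_norm_bound):
    """Generic kernel-DRO bound: E_centre[loss] + epsilon * ||loss||_H,
    assuming the loss lies in the RKHS with norm <= rkhs_norm_bound."""
    return centre_losses.mean() + epsilon * rkhs_norm_bound

# Toy misspecified setting: a Gaussian ambiguity-set centre vs. a skewed (lognormal) DGP.
rng = np.random.default_rng(0)
dgp_samples = rng.lognormal(mean=0.0, sigma=0.5, size=500)       # data-generating process
centre_samples = rng.normal(loc=dgp_samples.mean(),
                            scale=dgp_samples.std(), size=500)   # ambiguity-set centre

# A tolerance covering the DGP must be at least the MMD between centre and DGP.
eps = np.sqrt(max(mmd_squared(dgp_samples, centre_samples), 0.0))
print(f"estimated MMD between centre and DGP: {eps:.4f}")

# Newsvendor-style loss at a candidate order quantity q (costs chosen arbitrarily).
q = 1.5
centre_losses = (2.0 * np.maximum(centre_samples - q, 0.0)
                 + 1.0 * np.maximum(q - centre_samples, 0.0))
print("worst-case risk bound:",
      worst_case_risk_bound(centre_losses, epsilon=eps, rkhs_norm_bound=5.0))
```

In this toy setup the MMD estimate plays the role of a lower limit on the tolerance level, and the final line shows how the worst-case risk of a single decision would be upper-bounded once that tolerance is fixed; the paper's actual dual formulation and probabilistic guarantees are not reproduced here.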
Related papers
- Learning from Noisy Labels via Conditional Distributionally Robust Optimization [5.85767711644773]
Crowdsourcing has emerged as a practical solution for labeling large datasets.
However, noisy labels from annotators with varying levels of expertise make learning accurate models a significant challenge.
arXiv Detail & Related papers (2024-11-26T05:03:26Z) - Decision Making under the Exponential Family: Distributionally Robust Optimisation with Bayesian Ambiguity Sets [8.642152250082368]
We introduce Distributionally Robust optimisation with Bayesian Ambiguity Sets (DRO-BAS)
DRO-BAS hedges against model uncertainty by optimising the worst-case risk over a posterior-informed ambiguity set.
We prove that both admit, under conditions, strong dual formulations leading to efficient single-stage programs.
arXiv Detail & Related papers (2024-11-25T18:49:02Z) - Distributionally Robust Optimization [8.750805813120898]
DRO studies decision problems under uncertainty where the probability distribution governing the uncertain problem parameters is itself uncertain.
DRO seeks decisions that perform best under the worst distribution in the ambiguity set.
Recent research has uncovered its deep connections to regularization techniques and adversarial training in machine learning (a worked worst-case dual for a Kullback-Leibler ambiguity set is sketched after this list).
arXiv Detail & Related papers (2024-11-04T19:32:24Z) - Distributionally Robust Optimisation with Bayesian Ambiguity Sets [8.642152250082368]
We introduce Distributionally Robust optimisation with Bayesian Ambiguity Sets (DRO-BAS)
DRO-BAS hedges against uncertainty in the model by optimising the worst-case risk over a posterior-informed ambiguity set.
We show that our method admits a closed-form dual representation for many exponential family members.
arXiv Detail & Related papers (2024-09-05T12:59:38Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Distributionally Robust Bayesian Optimization with $\varphi$-divergences [45.48814080654241]
We consider robustness against data-shift in $\varphi$-divergences, which subsumes many popular choices, such as the Total Variation and the extant Kullback-Leibler divergence.
We show that the DRO-BO problem in this setting is equivalent to a finite-dimensional optimization problem which, even in the continuous context setting, can be easily implemented with provable sublinear regret bounds.
arXiv Detail & Related papers (2022-03-04T04:34:52Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Residuals-based distributionally robust optimization with covariate information [0.0]
We consider data-driven approaches that integrate a machine learning prediction model within distributionally robust optimization (DRO)
Our framework is flexible in the sense that it can accommodate a variety of learning setups and DRO ambiguity sets.
arXiv Detail & Related papers (2020-12-02T11:21:34Z) - Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with $\textit{external uncertainty}$ in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z) - Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2020-02-20T22:04:30Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
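Several of the entries above (the DRO survey and the DRO-BAS papers) revolve around dualising a worst-case expectation over a divergence ball. As a hedged illustration of that general idea, the sketch below evaluates the classical Kullback-Leibler DRO dual, $\sup_{Q:\,\mathrm{KL}(Q\|\hat{P}) \le \rho} \mathbb{E}_Q[\ell] = \inf_{\lambda > 0} \{\lambda\rho + \lambda \log \mathbb{E}_{\hat{P}}[e^{\ell/\lambda}]\}$, on synthetic losses. The gamma-distributed losses, the search bounds for $\lambda$, and the radii $\rho$ are arbitrary choices, and this is the textbook KL dual rather than the Bayesian ambiguity sets proposed in those papers.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_dro_worst_case(losses, rho):
    """Worst-case expected loss over {Q : KL(Q || P_hat) <= rho},
    where P_hat is the empirical distribution of `losses`, via the
    standard dual  inf_{lam > 0} lam*rho + lam*log E_P[exp(loss/lam)]."""
    def dual(lam):
        z = losses / lam
        # log-mean-exp computed stably
        log_mean_exp = np.log(np.mean(np.exp(z - z.max()))) + z.max()
        return lam * rho + lam * log_mean_exp
    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method="bounded")
    return res.fun

rng = np.random.default_rng(1)
losses = rng.gamma(shape=2.0, scale=1.0, size=1000)  # illustrative per-sample losses
for rho in (0.0, 0.01, 0.1):
    print(f"rho = {rho:5.2f}  worst-case risk ~ {kl_dro_worst_case(losses, rho):.3f}")
```

For $\rho = 0$ the dual recovers (approximately) the empirical mean loss, and the worst-case risk grows with the radius, which is the basic conservatism trade-off that posterior-informed ambiguity sets aim to control.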
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.