Wasserstein Distributionally Robust Inverse Multiobjective Optimization
- URL: http://arxiv.org/abs/2009.14552v1
- Date: Wed, 30 Sep 2020 10:44:07 GMT
- Title: Wasserstein Distributionally Robust Inverse Multiobjective Optimization
- Authors: Chaosheng Dong, Bo Zeng
- Abstract summary: We develop a Wasserstein distributionally robust inverse multiobjective optimization problem (WRO-IMOP).
We show that the excess risk of the WRO-IMOP estimator has a sub-linear convergence rate.
We demonstrate the effectiveness of our method on both a synthetic multiobjective quadratic program and a real world portfolio optimization problem.
- Score: 14.366265951396587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse multiobjective optimization provides a general framework for the
unsupervised learning task of inferring parameters of a multiobjective decision
making problem (DMP), based on a set of observed decisions from a human
expert. However, the performance of this framework relies critically on the
availability of an accurate DMP, sufficient decisions of high quality, and a
parameter space that contains enough information about the DMP. To hedge
against the uncertainties in the hypothetical DMP, the data, and the parameter
space, we investigate in this paper the distributionally robust approach for
inverse multiobjective optimization. Specifically, we leverage the Wasserstein
metric to construct a ball centered at the empirical distribution of these
decisions. We then formulate a Wasserstein distributionally robust inverse
multiobjective optimization problem (WRO-IMOP) that minimizes a worst-case
expected loss function, where the worst case is taken over all distributions in
the Wasserstein ball. We show that the excess risk of the WRO-IMOP estimator
has a sub-linear convergence rate. Furthermore, we propose the semi-infinite
reformulations of the WRO-IMOP and develop a cutting-plane algorithm that
converges to an approximate solution in finite iterations. Finally, we
demonstrate the effectiveness of our method on both a synthetic multiobjective
quadratic program and a real-world portfolio optimization problem.
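To make the estimator described above concrete, here is a minimal sketch of the worst-case formulation, assuming generic notation (parameter set Theta, loss ell, radius epsilon, observed decisions y_i) rather than the paper's own symbols:

```latex
% Sketch of a Wasserstein distributionally robust estimator of the generic form
% described in the abstract; the notation (\Theta, \ell, \epsilon, y_i) is assumed,
% not taken from the paper.
\[
  \widehat{\theta}_N \in \arg\min_{\theta \in \Theta}\;
  \sup_{\mathbb{Q} \in \mathcal{B}_{\epsilon}(\widehat{\mathbb{P}}_N)}
  \mathbb{E}_{y \sim \mathbb{Q}}\bigl[\,\ell(\theta, y)\,\bigr],
  \qquad
  \mathcal{B}_{\epsilon}(\widehat{\mathbb{P}}_N)
  = \bigl\{\mathbb{Q} : W\bigl(\mathbb{Q}, \widehat{\mathbb{P}}_N\bigr) \le \epsilon \bigr\},
\]
% where \widehat{\mathbb{P}}_N is the empirical distribution of the N observed
% decisions y_1, \dots, y_N, W is the Wasserstein metric, and \ell(\theta, y)
% measures how poorly decision y is explained by parameter \theta.
```

The abstract also mentions a semi-infinite reformulation solved by a cutting-plane algorithm. The toy script below only illustrates the generic cutting-plane pattern (solve a master problem over a finite set of cuts, then add the most violated constraint) on a made-up semi-infinite linear program; it is not the WRO-IMOP algorithm from the paper.

```python
"""Toy cutting-plane loop for a small semi-infinite linear program.

Illustrative sketch only; the problem below is made up and is NOT the
WRO-IMOP reformulation from the paper:

    minimize    x1 + x2
    subject to  t*x1 + (1 - t)*x2 >= 1   for all t in [0, 1],   x >= 0.
"""
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])               # master-problem objective
t_grid = np.linspace(0.0, 1.0, 1001)   # discretized search over the constraint index set
cuts = [0.5]                           # start from a single arbitrary cut
tol = 1e-6

for it in range(50):
    # Master problem: minimize c @ x subject to the cuts collected so far,
    # written as A_ub @ x <= b_ub for scipy (>= constraints are negated).
    A_ub = np.array([[-t, -(1.0 - t)] for t in cuts])
    b_ub = -np.ones(len(cuts))
    x = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)]).x

    # Separation step: find the most violated member of the infinite family.
    violation = 1.0 - (t_grid * x[0] + (1.0 - t_grid) * x[1])
    worst = int(np.argmax(violation))
    if violation[worst] <= tol:
        break                          # no violated constraint left: x is (approximately) feasible
    cuts.append(float(t_grid[worst]))  # add the violated constraint and re-solve

print(f"approximate solution x = {x} after {it + 1} iterations with {len(cuts)} cuts")
```

On this toy instance the loop typically terminates after one or two separation rounds; the same master/separation structure underlies cutting-plane methods for semi-infinite programs in general.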
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Model-Free Robust Average-Reward Reinforcement Learning [25.125481838479256]
We focus on robust average-reward MDPs under the model-free setting.
We design two model-free algorithms, robust relative value iteration (RVI) TD and robust RVI Q-learning, and theoretically prove their convergence to the optimal solution.
arXiv Detail & Related papers (2023-05-17T18:19:23Z) - Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z) - Hedging Complexity in Generalization via a Parametric Distributionally
Robust Optimization Framework [18.6306170209029]
Empirical risk minimization (ERM) and distributionally robust optimization (DRO) are popular approaches for solving optimization problems.
We propose a simple approach in which the distribution of random perturbations is approximated using a parametric family of distributions.
We show that this new source of error can be controlled by suitable DRO formulations.
arXiv Detail & Related papers (2022-12-03T03:26:34Z) - Wasserstein Distributionally Robust Estimation in High Dimensions:
Performance Analysis and Optimal Hyperparameter Tuning [0.0]
We propose a Wasserstein distributionally robust estimation framework to estimate an unknown parameter from noisy linear measurements.
We focus on the task of analyzing the squared error performance of such estimators.
We show that the squared error can be recovered as the solution of a convex-concave optimization problem.
arXiv Detail & Related papers (2022-06-27T13:02:59Z) - Complexity-Free Generalization via Distributionally Robust Optimization [4.313143197674466]
We present an alternate route to obtain generalization bounds on the solution from distributionally robust optimization (DRO).
Our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function.
Notably, when using maximum mean discrepancy as a DRO distance metric, our analysis implies, to the best of our knowledge, the first generalization bound in the literature that depends solely on the true loss function.
arXiv Detail & Related papers (2021-06-21T15:19:52Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential
Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Permutation Invariant Policy Optimization for Mean-Field Multi-Agent
Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, nonconvex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.