Scalable Distributional Robustness in a Class of Non Convex Optimization with Guarantees
- URL: http://arxiv.org/abs/2205.15624v1
- Date: Tue, 31 May 2022 09:07:01 GMT
- Title: Scalable Distributional Robustness in a Class of Non Convex Optimization with Guarantees
- Authors: Avinandan Bose, Arunesh Sinha, Tien Mai
- Abstract summary: Distributionally robust optimization (DRO) has shown promise in providing robustness in learning as well as sample-based optimization problems.
We transform the problem into a mixed-integer second-order cone program (MISOCP), which guarantees near-global optimality but does not scale to real-world datasets, so we propose clustering- and stratified-sampling-based abstractions to restore scalability.
- Score: 7.541571634887807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributionally robust optimization (DRO) has shown a lot of
promise in providing robustness in learning as well as sample-based
optimization problems. We endeavor to provide DRO solutions for a class of
sum-of-fractionals, non-convex optimization problems used for decision making
in prominent areas such as facility location and security games. In contrast
to previous work, we find it more tractable to optimize the equivalent
variance-regularized form of DRO rather than the minimax form. We transform
the variance-regularized form into a mixed-integer second-order cone program
(MISOCP), which, while guaranteeing near-global optimality, does not scale
well enough to solve problems with real-world datasets. We further propose
two abstraction approaches, based on clustering and stratified sampling, to
increase scalability, which we then use on real-world datasets. Importantly,
we provide near-global optimality guarantees for our approach and show
experimentally that our solution quality is better than the locally optimal
solutions achieved by state-of-the-art gradient-based methods. We
experimentally compare our different approaches and baselines, and reveal
nuanced properties of a DRO solution.
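For reference, the variance-regularized equivalence the abstract relies on can be sketched as follows; this uses the standard chi-squared-divergence form of the result (the paper's exact ambiguity set, constants, and sum-of-fractionals objective are not reproduced here):

```latex
% Minimax (DRO) form over a chi-squared ball around the empirical measure P_n,
% and its variance-regularized surrogate (equal up to higher-order terms;
% constants depend on the divergence normalization):
\min_{\theta} \sup_{Q :\, D_{\chi^2}(Q \,\|\, P_n) \le \rho/n}
  \mathbb{E}_{Q}\big[f(\theta;\xi)\big]
\;\approx\;
\min_{\theta}\; \mathbb{E}_{P_n}\big[f(\theta;\xi)\big]
  + \sqrt{\tfrac{2\rho}{n}\,\mathrm{Var}_{P_n}\big[f(\theta;\xi)\big]}
```

The right-hand side is the form that lends itself to a MISOCP reformulation: the square-root-of-variance term is a Euclidean norm of affine expressions of the per-sample losses, hence second-order-cone representable.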
Related papers
- Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls [8.720733751119994]
Adversarially robust optimization (ARO) has become the de facto standard for training models to defend against adversarial attacks during testing.
Despite their robustness, these models often suffer from severe overfitting.
We propose two approaches to replace the empirical distribution in training with (i) a worst-case distribution within an ambiguity set, or (ii) a mixture of the empirical distribution with a distribution derived from an auxiliary dataset.
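A hedged sketch of the first variant's objective, assuming Wasserstein balls of radii ε₁ and ε₂ around the empirical distribution and the auxiliary distribution (notation illustrative, not taken from the paper):

```latex
\min_{\theta}\;
\sup_{Q \,\in\, \mathbb{B}_{\varepsilon_1}(\widehat{P}_N) \,\cap\, \mathbb{B}_{\varepsilon_2}(\widehat{P}_{\mathrm{aux}})}
\mathbb{E}_{Q}\big[\ell(\theta;\xi)\big]
```

Intersecting the two balls shrinks the ambiguity set relative to either ball alone, trading some worst-case coverage for reduced conservatism.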
arXiv Detail & Related papers (2024-07-18T15:59:37Z)
- PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present a novel unified bilevel optimization-based framework, PARL, formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper-level alignment objective (reward design) by the lower-level optimal variable.
Our empirical results substantiate that the proposed PARL can address the alignment concerns in RL, showing significant improvements.
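A generic bilevel template of the kind this summary describes, with the upper-level (alignment/reward-design) objective evaluated at the lower-level (policy-optimization) optimum; the symbols are illustrative, not the paper's:

```latex
\min_{\theta}\; f\big(\theta,\, \lambda^{*}(\theta)\big)
\quad \text{s.t.} \quad
\lambda^{*}(\theta) \in \arg\min_{\lambda}\; g(\theta, \lambda)
```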
arXiv Detail & Related papers (2023-08-03T18:03:44Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
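One standard way to instantiate such high-confidence filtering with Gaussian-process confidence bounds, offered as an assumption rather than BALLET's exact rule:

```latex
\mathcal{R}_t \;=\; \big\{\, x \in \mathcal{X} \;:\;
\mathrm{UCB}_t(x) \,\ge\, \max_{x' \in \mathcal{X}} \mathrm{LCB}_t(x') \,\big\}
```

Any maximizer lies in such a region with high probability, so standard BO can then be run on the shrinking region instead of all of X.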
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Federated Distributionally Robust Optimization with Non-Convex Objectives: Algorithm and Analysis [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence, but also remains robust against data heterogeneity as well as malicious attacks.
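For orientation, a D-norm-style (Bertsimas-Sim) budget uncertainty set around a nominal distribution p̄; whether the paper's constrained variant matches this exactly is an assumption:

```latex
\mathcal{U} \;=\; \Big\{\, p \;:\; p_i = \bar{p}_i\,(1 + \epsilon z_i),\;
\|z\|_{\infty} \le 1,\; \|z\|_{1} \le \Gamma,\;
\textstyle\sum_i p_i = 1,\; p \ge 0 \,\Big\}
```

The budget Γ caps how much total deviation from the prior p̄ is allowed, which is what gives flexible control over the degree of robustness.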
arXiv Detail & Related papers (2023-07-25T01:56:57Z)
- Smoothed $f$-Divergence Distributionally Robust Optimization [5.50764401597583]
We argue that a special type of distributionally robust optimization (DRO) formulation offers theoretical advantages.
The DRO formulation uses an ambiguity set based on a Kullback-Leibler (KL) divergence smoothed by the Wasserstein or Lévy-Prokhorov (LP) distance.
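One natural reading of "KL smoothed by the LP distance" is a KL ball inflated by an LP ball; the following is a sketch of that reading, not the paper's verbatim definition:

```latex
\mathcal{B}_{\delta,\varepsilon}(P) \;=\; \big\{\, Q \;:\; \exists\, Q'
\text{ s.t. } D_{\mathrm{KL}}(Q' \,\|\, P) \le \delta
\ \text{ and } \ \mathrm{LP}(Q, Q') \le \varepsilon \,\big\}
```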
arXiv Detail & Related papers (2023-06-24T19:22:01Z)
- Hedging Complexity in Generalization via a Parametric Distributionally Robust Optimization Framework [18.6306170209029]
Empirical risk minimization (ERM) and distributionally robust optimization (DRO) are popular approaches for solving optimization problems.
We propose a simple approach in which the distribution of random perturbations is approximated using a parametric family of distributions.
We show that this new source of error can be controlled by suitable DRO formulations.
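A hedged sketch of a parametric DRO of this kind, with the perturbation distribution restricted to a parametric family {p_η} and the ambiguity set placed in parameter space (notation mine):

```latex
\min_{\theta}\; \sup_{\eta \,:\, D(p_{\eta},\, p_{\hat{\eta}}) \le \rho}\;
\mathbb{E}_{\xi \sim p_{\eta}}\big[\ell(\theta;\xi)\big]
```

Restricting to a parametric family introduces a misspecification error, which is the "new source of error" the summary says suitable DRO formulations can control.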
arXiv Detail & Related papers (2022-12-03T03:26:34Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
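For context, the classical trust-region SQP subproblem at iterate x_k, with stochastic gradient estimate ḡ_k and Hessian approximation B_k (which may be indefinite); how TR-StoSQP relaxes the linearized constraint when it conflicts with the trust region is not reproduced here:

```latex
\min_{d}\;\; \bar{g}_k^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d
\quad \text{s.t.} \quad
c_k + J_k d = 0, \qquad \|d\| \le \Delta_k
```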
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Distributed Distributionally Robust Optimization with Non-Convex Objectives [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence, but also remains robust against data heterogeneity as well as malicious attacks.
arXiv Detail & Related papers (2022-10-14T07:39:13Z)
- When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee [51.527543027813344]
We propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) optimization.
For both one-way and two-way pAUC, we propose two algorithms and prove their convergence for optimizing their two formulations, respectively.
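As a concrete illustration of the DRO connection for one-way pAUC, here is a minimal sketch in which, for each positive example, a pairwise loss is averaged over only the hardest alpha-fraction of negatives (a CVaR-style reweighting; the paper's exact surrogate, weights, and optimization algorithm may differ):

```python
import numpy as np

def one_way_pauc_loss(pos_scores, neg_scores, alpha=0.1):
    """Hedged sketch of a one-way partial-AUC surrogate: for each positive
    score, average a squared-hinge pairwise loss over the hardest
    alpha-fraction of negatives (the CVaR / DRO-style reweighting)."""
    k = max(1, int(alpha * len(neg_scores)))
    per_positive = []
    for s_pos in pos_scores:
        # Pairwise squared-hinge loss of this positive against every negative.
        losses = np.maximum(0.0, 1.0 - (s_pos - neg_scores)) ** 2
        # Keep only the k largest losses, i.e. the hardest negatives.
        per_positive.append(np.sort(losses)[-k:].mean())
    return float(np.mean(per_positive))

# Illustrative usage with synthetic scores.
rng = np.random.default_rng(0)
print(one_way_pauc_loss(rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 500)))
```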
arXiv Detail & Related papers (2022-03-01T01:59:53Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
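A hedged sketch of the resulting minimax game, with a generator g_φ playing the worst-case distribution and a divergence penalty keeping it near the data distribution P₀ (notation mine, not the paper's):

```latex
\min_{\theta}\; \max_{\phi}\;\;
\mathbb{E}_{\xi \sim g_{\phi}}\big[\ell(\theta;\xi)\big]
\;-\; \lambda\, D\big(g_{\phi},\, P_0\big)
```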
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
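A hedged sketch of the robust selection rule in context-distribution form: the learner picks a point, an adversary picks the context distribution from a ball around a reference distribution (the choice of distance d and radius schedule ε_t are assumptions):

```latex
x_t \in \arg\max_{x \in \mathcal{X}}\;
\inf_{Q \,:\, d(Q,\, \widehat{P}) \le \varepsilon_t}\;
\mathbb{E}_{c \sim Q}\big[f(x, c)\big]
```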
arXiv Detail & Related papers (2020-02-20T22:04:30Z)