Complexity-Free Generalization via Distributionally Robust Optimization
- URL: http://arxiv.org/abs/2106.11180v1
- Date: Mon, 21 Jun 2021 15:19:52 GMT
- Title: Complexity-Free Generalization via Distributionally Robust Optimization
- Authors: Henry Lam, Yibo Zeng
- Abstract summary: We present an alternate route to obtain generalization bounds on the solution from distributionally robust optimization (DRO)
Our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function.
Notably, when using maximum mean discrepancy as a DRO distance metric, our analysis implies, to the best of our knowledge, the first generalization bound in the literature that depends solely on the true loss function.
- Score: 4.313143197674466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Established approaches to obtain generalization bounds in data-driven
optimization and machine learning mostly build on solutions from empirical risk
minimization (ERM), which depend crucially on the functional complexity of the
hypothesis class. In this paper, we present an alternate route to obtain these
bounds on the solution from distributionally robust optimization (DRO), a
recent data-driven optimization framework based on worst-case analysis and the
notion of ambiguity set to capture statistical uncertainty. In contrast to the
hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set
geometry and its compatibility with the true loss function. Notably, when using
maximum mean discrepancy as a DRO distance metric, our analysis implies, to the
best of our knowledge, the first generalization bound in the literature that
depends solely on the true loss function, entirely free of any complexity
measures or bounds on the hypothesis class.
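Since the headline result uses maximum mean discrepancy as the DRO distance, a minimal numerical sketch of the standard (biased) empirical MMD estimator with a Gaussian kernel may help fix ideas; the kernel bandwidth, sample sizes, and toy data below are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of x and the rows of y."""
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased empirical estimate of the squared MMD between samples x ~ P and y ~ Q."""
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Toy check: samples from the same distribution give a small MMD,
# samples from a shifted distribution give a noticeably larger one.
rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(200, 2))
q_same = rng.normal(0.0, 1.0, size=(200, 2))
q_shift = rng.normal(1.5, 1.0, size=(200, 2))
print(mmd_squared(p, q_same), mmd_squared(p, q_shift))
```

An MMD ambiguity set then collects every distribution within a chosen MMD radius of the empirical distribution, and the generalization bound concerns the worst-case expected loss over that ball.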
Related papers
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [61.580419063416734]
A recent stream of structured learning approaches has improved the practical state of the art for a range of optimization problems.
The key idea is to exploit the statistical distribution over instances instead of dealing with instances separately.
In this article, we investigate methods that smooth the risk by perturbing the policy, which eases optimization and improves the generalization error.
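To make the perturb-to-smooth idea concrete with a toy example (an illustration only, not the construction in the paper): replacing a deterministic decision with a Gaussian perturbation of it turns a discontinuous objective into a smooth function of the parameter, and the smoothed risk can be estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(5)

def hard_loss(w):
    """A discontinuous objective: 1 if the decision is negative, 0 otherwise."""
    return (w < 0).astype(float)

def smoothed_loss(theta, sigma=0.5, n_samples=10_000):
    """Monte Carlo estimate of E[hard_loss(theta + eps)] with eps ~ N(0, sigma^2).

    Perturbing the decision turns the step function into a smooth function of
    theta (it equals the Gaussian CDF evaluated at -theta / sigma)."""
    eps = rng.normal(0.0, sigma, size=n_samples)
    return hard_loss(theta + eps).mean()

for theta in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(theta, smoothed_loss(theta))
```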
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls [8.720733751119994]
Adversarially robust optimization (ARO) has become the de facto standard for training models to defend against adversarial attacks during testing.
Despite their robustness, these models often suffer from severe overfitting.
We propose two approaches to replace the empirical distribution in training with: (i) a worst-case distribution within an ambiguity set; or (ii) a mixture of the empirical distribution with one derived from an auxiliary dataset.
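One way to picture approach (ii), mixing the empirical distribution with an auxiliary one, is to reweight samples from the two datasets when fitting a classifier. The sketch below is only an illustration of that idea, not the authors' construction; the mixture weight, the synthetic data, and the use of scikit-learn's LogisticRegression are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Empirical (training) data and an auxiliary dataset, both synthetic here.
X_emp = rng.normal(size=(100, 5))
y_emp = (X_emp[:, 0] > 0).astype(int)
X_aux = rng.normal(size=(300, 5))
y_aux = (X_aux[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)

alpha = 0.7  # weight on the empirical distribution; (1 - alpha) goes to the auxiliary one

X = np.vstack([X_emp, X_aux])
y = np.concatenate([y_emp, y_aux])
# Per-sample weights chosen so each dataset contributes its mixture weight in total.
w = np.concatenate([np.full(len(X_emp), alpha / len(X_emp)),
                    np.full(len(X_aux), (1 - alpha) / len(X_aux))])

clf = LogisticRegression().fit(X, y, sample_weight=w)
print("accuracy on the empirical data:", clf.score(X_emp, y_emp))
```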
arXiv Detail & Related papers (2024-07-18T15:59:37Z)
- Federated Distributionally Robust Optimization with Non-Convex Objectives: Algorithm and Analysis [24.64654924173679]
An asynchronous distributed algorithm named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) is proposed.
A new uncertainty set, the constrained D-norm uncertainty set, is developed to leverage the prior distribution and flexibly control the degree of robustness.
Empirical studies on real-world datasets demonstrate that the proposed method not only achieves fast convergence but also remains robust against data heterogeneity as well as malicious attacks.
arXiv Detail & Related papers (2023-07-25T01:56:57Z)
- Hedging Complexity in Generalization via a Parametric Distributionally Robust Optimization Framework [18.6306170209029]
Empirical risk minimization (ERM) and distributionally robust optimization (DRO) are popular approaches for solving optimization problems.
We propose a simple approach in which the distribution of random perturbations is approximated using a parametric family of distributions.
We show that this new source of error can be controlled by suitable DRO formulations.
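A bare-bones sketch of the parametric idea: approximate the distribution of observed random perturbations by a member of a parametric family, here a Gaussian fit by maximum likelihood. The choice of family and the synthetic perturbations are assumptions made for illustration; the DRO step that hedges against the fitting error is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed random perturbations (e.g., demand noise), drawn here from an unknown, skewed law.
perturbations = rng.gamma(shape=2.0, scale=1.5, size=500) - 3.0

# Approximate their distribution with a parametric (Gaussian) family via maximum likelihood.
mu_hat = perturbations.mean()
sigma_hat = perturbations.std(ddof=0)
print(f"fitted family: N({mu_hat:.3f}, {sigma_hat:.3f}^2)")

# A DRO formulation would then hedge against the approximation/estimation error,
# e.g., by optimizing the worst case over a neighborhood of (mu_hat, sigma_hat).
```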
arXiv Detail & Related papers (2022-12-03T03:26:34Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first study of the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
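For context on the quantity being optimized, AUPRC can be estimated with scikit-learn's average_precision_score (a step-wise approximation of the area under the precision-recall curve); the labels and scores below are random placeholders, not data from the paper.

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=1000)            # binary relevance labels
y_score = y_true * 0.3 + 0.7 * rng.random(1000)   # scores loosely correlated with the labels

print("AUPRC (average precision):", average_precision_score(y_true, y_score))
```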
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Federated Distributionally Robust Optimization for Phase Configuration of RISs [106.4688072667105]
We study the problem of robust reconfigurable intelligent surface (RIS)-aided downlink communication over heterogeneous RIS types in a supervised learning setting.
By modeling downlink communication over heterogeneous RIS designs as different workers that learn how to optimize phase configurations in a distributed manner, we solve this distributed learning problem.
Our proposed algorithm requires fewer communication rounds to achieve the same worst-case distribution test accuracy compared to competitive baselines.
arXiv Detail & Related papers (2021-08-20T07:07:45Z)
- Distributionally Robust Learning [11.916893752969429]
This book develops a comprehensive statistical learning framework that is robust to (distributional) perturbations in the data.
A tractable DRO relaxation for each problem is derived, establishing a connection between robustness and regularization.
Beyond theory, we include numerical experiments and case studies using synthetic and real data.
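One textbook instance of that connection (stated here from the general Wasserstein DRO literature rather than quoted from the book) is that for a loss $\ell_\theta$ that is $L$-Lipschitz in the data argument, the worst case over a type-1 Wasserstein ball of radius $\varepsilon$ around the empirical distribution $\hat{P}_n$ satisfies

$$
\sup_{Q:\, W_1(Q, \hat{P}_n) \le \varepsilon} \mathbb{E}_{Q}\big[\ell_\theta(\xi)\big]
\;\le\; \mathbb{E}_{\hat{P}_n}\big[\ell_\theta(\xi)\big] + \varepsilon L ,
$$

with equality under additional conditions on the support, so the DRO objective behaves like the empirical risk plus a Lipschitz (norm-type) penalty.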
arXiv Detail & Related papers (2021-08-20T04:14:18Z)
- A general sample complexity analysis of vanilla policy gradient [101.16957584135767]
Policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems.
Despite its popularity, a solid theoretical understanding of even the "vanilla" PG method has remained limited; this work provides a general sample complexity analysis for it.
arXiv Detail & Related papers (2021-07-23T19:38:17Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
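To isolate the extra-gradient idea on a toy bilinear saddle point (not the Wasserstein DRSL problem from the paper; the function, step size, and iteration count are illustrative), each iteration takes a look-ahead gradient step and then updates using gradients evaluated at the look-ahead point.

```python
# Toy saddle-point problem: min_x max_y f(x, y) = x * y, saddle point at (0, 0).
def grad_x(x, y):
    return y   # df/dx

def grad_y(x, y):
    return x   # df/dy

eta = 0.2
x, y = 1.0, 1.0
for _ in range(1000):
    # Extrapolation (look-ahead) step.
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Update step using gradients at the look-ahead point.
    x = x - eta * grad_x(x_half, y_half)
    y = y + eta * grad_y(x_half, y_half)

print(x, y)  # both coordinates should be very close to the saddle point (0, 0)
```

Plain simultaneous gradient descent-ascent diverges on this bilinear example, which is exactly the failure mode the extrapolation step remedies.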
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- From Majorization to Interpolation: Distributionally Robust Learning using Kernel Smoothing [1.2891210250935146]
We study the function approximation aspect of distributionally robust optimization (DRO) based on probability metrics.
This paper instead proposes robust learning algorithms based on smooth function approximation and convolution.
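A small sketch of smoothing by convolution, separate from the paper's specific algorithm: convolving a nonsmooth loss (absolute value) with a Gaussian kernel on a grid yields a smooth approximation of it. The grid, bandwidth, and choice of loss are assumptions made for illustration.

```python
import numpy as np

# Grid and a nonsmooth loss (absolute value, with a kink at 0).
grid = np.linspace(-5, 5, 2001)
loss = np.abs(grid)

# Normalized Gaussian kernel for the convolution (bandwidth is an illustrative choice).
bandwidth = 0.5
kernel = np.exp(-grid**2 / (2 * bandwidth**2))
kernel /= kernel.sum()

# Smoothed loss evaluated on the same grid.
smoothed = np.convolve(loss, kernel, mode="same")

mid = len(grid) // 2          # grid point x = 0, at the kink
far = mid + 600               # grid point x = 3, away from the kink
print(loss[mid], smoothed[mid])   # 0.0 vs roughly bandwidth * sqrt(2 / pi) ≈ 0.40
print(loss[far], smoothed[far])   # the two functions nearly coincide away from the kink
```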
arXiv Detail & Related papers (2021-02-16T22:25:18Z)
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)