Specifying and Solving Robust Empirical Risk Minimization Problems Using CVXPY
- URL: http://arxiv.org/abs/2306.05649v3
- Date: Sat, 14 Sep 2024 17:40:48 GMT
- Title: Specifying and Solving Robust Empirical Risk Minimization Problems Using CVXPY
- Authors: Eric Luxenberg, Dhruv Malik, Yuanzhi Li, Aarti Singh, Stephen Boyd
- Abstract summary: In some simple cases, such problems can be expressed in an analytical form.
In general, the problem can be made tractable via dualization, which turns a min-max problem into a min-min problem.
We demonstrate how CVXPY can be used to automate this dualization procedure in a user-friendly manner.
- Score: 39.47096356027248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider robust empirical risk minimization (ERM), where model parameters are chosen to minimize the worst-case empirical loss when each data point varies over a given convex uncertainty set. In some simple cases, such problems can be expressed in an analytical form. In general, the problem can be made tractable via dualization, which turns a min-max problem into a min-min problem. Dualization requires expertise and is tedious and error-prone. We demonstrate how CVXPY can be used to automate this dualization procedure in a user-friendly manner. Our framework allows practitioners to specify and solve robust ERM problems with a general class of convex losses, capturing many standard regression and classification problems. Users can easily specify any complex uncertainty set that is representable via disciplined convex programming (DCP) constraints.
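The framework described above automates dualization; the snippet below is a minimal sketch of the kind of robust ERM problem it targets, written in plain CVXPY with the dualization done by hand for the simplest case: least-absolute-deviation regression where each feature vector varies over a per-feature box. All data and variable names are hypothetical, and this is not the paper's interface.

```python
import cvxpy as cp
import numpy as np

# Hypothetical problem data (illustrative only).
n, d = 50, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))        # nominal feature vectors x_i (rows of X)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
delta = 0.1 * np.ones(d)               # box radii: each x_i varies in [x_i - delta, x_i + delta]

theta = cp.Variable(d)
# For a box uncertainty set the inner maximum has a closed form:
#   max_{|u| <= delta} |(x_i + u)^T theta - y_i| = |x_i^T theta - y_i| + delta^T |theta|,
# which is exactly what dualizing the inner maximization yields in this simple case.
worst_case_losses = cp.abs(X @ theta - y) + delta @ cp.abs(theta)
prob = cp.Problem(cp.Minimize(cp.sum(worst_case_losses) / n))
prob.solve()
print("robust coefficients:", theta.value)
```

For richer DCP-representable uncertainty sets no such closed form is available, which is where the automated dualization described in the abstract takes over.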
Related papers
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - On the Variance, Admissibility, and Stability of Empirical Risk Minimization [80.26309576810844]
Empirical Risk Minimization (ERM) with squared loss may attain minimax suboptimal error rates.
We show that under mild assumptions, the suboptimality of ERM must be due to large bias rather than variance.
We also show that our estimates imply stability of ERM, complementing the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes.
arXiv Detail & Related papers (2023-05-29T15:25:48Z) - Empirical Risk Minimization with Relative Entropy Regularization [6.815730801645783]
The empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) is investigated.
The solution to this problem, if it exists, is shown to be a unique probability measure, mutually absolutely continuous with the reference measure.
For a fixed dataset and under a specific condition, the empirical risk is shown to be a sub-Gaussian random variable.
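For context, this unique solution is commonly written as a Gibbs measure; the following is a sketch in our own notation (not copied from the paper), with reference measure $Q$, regularization parameter $\lambda > 0$, and empirical risk $\mathsf{L}_z(\theta)$ on dataset $z$:

$$ \frac{dP^{*}}{dQ}(\theta) = \frac{\exp\left(-\tfrac{1}{\lambda}\,\mathsf{L}_z(\theta)\right)}{\displaystyle\int \exp\left(-\tfrac{1}{\lambda}\,\mathsf{L}_z(\tilde\theta)\right)\, dQ(\tilde\theta)}. $$

The strictly positive density makes the mutual absolute continuity with the reference measure immediate.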
arXiv Detail & Related papers (2022-11-12T09:41:02Z) - A Novel Plug-and-Play Approach for Adversarially Robust Generalization [38.72514422694518]
We propose a robust framework that employs adversarially robust training to safeguard ML models against perturbed testing data.
Our contributions can be seen from both computational and statistical perspectives.
arXiv Detail & Related papers (2022-08-19T17:02:55Z) - The Geometry of Adversarial Training in Binary Classification [1.2891210250935143]
We establish an equivalence between a family of adversarial training problems for non-parametric binary classification and a family of regularized risk minimization problems.
The resulting regularized risk minimization problems admit exact convex relaxations of the type $L^1 +$ (nonlocal) $\operatorname{TV}$.
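The adversarial training problems in question take the standard min-max form (notation ours, a sketch rather than the paper's exact setup):

$$ \min_{f}\; \mathbb{E}_{(x,y)}\left[\max_{\|x' - x\| \le \varepsilon} \ell\big(f(x'), y\big)\right], $$

and the equivalence above recasts this min-max problem as an ordinary risk plus an $L^1 +$ (nonlocal) $\operatorname{TV}$ regularizer.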
arXiv Detail & Related papers (2021-11-26T17:19:50Z) - Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Stability and Generalization of Stochastic Gradient Methods for Minimax Problems [71.60601421935844]
Many machine learning problems, such as Generative Adversarial Networks (GANs), can be formulated as minimax problems.
We provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems.
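As a concrete instance of the minimax template $\min_{w}\max_{v} f(w, v)$ analyzed here, the classical GAN objective reads:

$$ \min_{G}\,\max_{D}\;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]. $$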
arXiv Detail & Related papers (2021-05-08T22:38:00Z) - On the Minimal Error of Empirical Risk Minimization [90.09093901700754]
We study the minimal error of the Empirical Risk Minimization (ERM) procedure in the task of regression.
Our sharp lower bounds shed light on the possibility (or impossibility) of adapting to the simplicity of the model generating the data.
arXiv Detail & Related papers (2021-02-24T04:47:55Z) - Tilted Empirical Risk Minimization [26.87656095874882]
We show that it is possible to flexibly tune the impact of individual losses through a straightforward extension to empirical risk minimization.
We show that tilted empirical risk minimization (TERM) can increase or decrease the influence of outliers, enabling fairness or robustness, respectively.
It can also enable entirely new applications, such as simultaneously addressing outliers and promoting fairness.
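For reference, the tilted objective replaces the average loss with a log-sum-exp; a sketch with tilt parameter $t \neq 0$ and per-sample losses $\ell_i(\theta)$:

$$ \min_{\theta}\; \frac{1}{t}\,\log\!\left(\frac{1}{n}\sum_{i=1}^{n} e^{\,t\,\ell_i(\theta)}\right), $$

where $t > 0$ magnifies the influence of the largest (outlier) losses, $t < 0$ suppresses it, and $t \to 0$ recovers standard ERM.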
arXiv Detail & Related papers (2020-07-02T14:49:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.