Robust support vector machines via conic optimization
- URL: http://arxiv.org/abs/2402.01797v1
- Date: Fri, 2 Feb 2024 05:42:50 GMT
- Title: Robust support vector machines via conic optimization
- Authors: Valentina Cepeda, Andrés Gómez, Shaoning Han
- Abstract summary: We consider the problem of learning support vector machines robust to uncertainty.
We show that the proposed estimator is competitive with standard SVMs with the hinge loss in outlier-free regimes and better in the presence of outliers.
- Score: 0.8594140167290099
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the problem of learning support vector machines robust to
uncertainty. It has been established in the literature that typical loss
functions, including the hinge loss, are sensitive to data perturbations and
outliers, thus performing poorly in the setting considered. In contrast, using
the 0-1 loss or a suitable non-convex approximation results in robust
estimators, at the expense of large computational costs. In this paper we use
mixed-integer optimization techniques to derive a new loss function that better
approximates the 0-1 loss compared with existing alternatives, while preserving
the convexity of the learning problem. In our computational results, we show
that the proposed estimator is competitive with the standard SVMs with the
hinge loss in outlier-free regimes and better in the presence of outliers.
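As a quick illustration of the sensitivity gap the abstract describes, the following minimal numpy sketch (illustrative data and classifier, not from the paper) evaluates the hinge loss and the 0-1 loss of a fixed linear classifier before and after a single gross outlier is added: the hinge loss grows without bound in the outlier's magnitude, while the 0-1 loss changes by at most 1/n.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fixed linear classifier f(x) = w*x + b on one-dimensional data.
    w, b = 1.0, 0.0
    X = rng.normal(loc=2.0, scale=0.5, size=100)  # points of the positive class
    y = np.ones(100)

    margins = y * (w * X + b)
    print("clean hinge loss:", np.maximum(0.0, 1.0 - margins).mean())
    print("clean 0-1 loss:  ", (margins <= 0).mean())

    # Add one mislabeled point far on the wrong side of the boundary.
    X_out = np.append(X, -50.0)
    y_out = np.append(y, 1.0)
    m_out = y_out * (w * X_out + b)
    print("hinge loss with outlier:", np.maximum(0.0, 1.0 - m_out).mean())  # jumps by ~0.5
    print("0-1 loss with outlier:  ", (m_out <= 0).mean())                  # moves by only 1/101

Because the hinge penalty is unbounded, a single corrupted point can dominate the empirical risk and drag the learned hyperplane arbitrarily far; the bounded 0-1 loss caps each point's influence at 1, which is the robustness the paper's convex surrogate aims to retain.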
Related papers
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting loss functions in an arbitrary number of rounds k, unknown to the learner.
We present a robust online optimization framework for this setting.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Asymptotic Characterisation of Robust Empirical Risk Minimisation Performance in the Presence of Outliers [18.455890316339595]
We study robust linear regression in high dimensions, when both the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n/d$, under a data model that includes outliers.
We provide exact asymptotics for the performance of empirical risk minimisation (ERM) using $\ell_2$-regularised $\ell_2$, $\ell_1$, and Huber losses.
arXiv Detail & Related papers (2023-05-30T12:18:39Z)
- Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-05-23T12:20:29Z)
- On User-Level Private Convex Optimization [59.75368670035683]
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
arXiv Detail & Related papers (2023-05-08T17:47:28Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Support Vector Machines with the Hard-Margin Loss: Optimal Training via Combinatorial Benders' Cuts [8.281391209717105]
We show how to train the hard-margin SVM model to global optimality.
We introduce an iterative sampling and decomposition algorithm that solves the problem; a naive mixed-integer formulation of the underlying training problem is sketched below.
arXiv Detail & Related papers (2022-07-15T18:21:51Z)
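For concreteness, here is a minimal sketch of the training problem behind the entry above: minimizing a hard-margin (0-1 style) loss for a linear classifier, written as one standard big-M mixed-integer program. This uses PuLP with its bundled CBC solver on toy data; it is not the paper's Benders decomposition, and variable bounds, data, and names are illustrative assumptions.

    import numpy as np
    import pulp  # open-source LP/MIP modeler; solver choice and data are illustrative

    rng = np.random.default_rng(0)
    n, d = 40, 2
    X = rng.normal(size=(n, d))
    y = np.where(X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0, 1, -1)

    prob = pulp.LpProblem("hard_margin_loss_svm", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}", lowBound=-1, upBound=1) for j in range(d)]
    b = pulp.LpVariable("b", lowBound=-1, upBound=1)
    z = [pulp.LpVariable(f"z{i}", cat=pulp.LpBinary) for i in range(n)]  # 1 iff point i violates the margin

    # Valid big-M: |w.x_i + b| <= sum_j |x_ij| + 1 because w and b live in [-1, 1].
    M = float(np.abs(X).sum(axis=1).max()) + 2.0

    prob += pulp.lpSum(z)  # objective: count of margin violations (hard-margin / 0-1 style loss)
    for i in range(n):
        margin = pulp.lpSum(float(X[i, j]) * w[j] for j in range(d)) + b
        prob += int(y[i]) * margin >= 1 - M * z[i]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("margin violations at the global optimum:", int(pulp.value(prob.objective)))

The binary variables make this problem NP-hard in general, which is why the entry above resorts to combinatorial Benders' cuts for exact training, and why the main paper instead derives a convex conic surrogate for the same 0-1 loss.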
- Probabilistically Robust Learning: Balancing Average- and Worst-case Performance [105.87195436925722]
We propose a framework called probabilistic robustness that bridges the gap between the accurate, yet brittle, average case and the robust, yet conservative, worst case.
From a theoretical point of view, this framework overcomes the trade-offs between the performance and the sample-complexity of worst-case and average-case learning.
arXiv Detail & Related papers (2022-02-02T17:01:38Z)
- Max-Margin Contrastive Learning [120.32963353348674]
We present max-margin contrastive learning (MMCL) for unsupervised representation learning.
Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem (a toy version of this selection step is sketched at the end of this list).
We validate our approach on standard vision benchmark datasets, demonstrating better performance in unsupervised representation learning.
arXiv Detail & Related papers (2021-12-21T18:56:54Z)
- Robust Classification via Support Vector Machines [1.7520660701924717]
We propose two robust classifiers under data uncertainty.
The first is called Single Perturbation SVM (SP-SVM) and provides a constructive method by allowing a controlled perturbation to one feature of the data.
The second method is called Extreme Empirical Loss SVM (EEL-SVM) and is based on a new empirical loss estimate, namely, the Extreme Empirical Loss (EEL).
arXiv Detail & Related papers (2021-04-27T20:20:12Z)
- Adversarially Robust Learning via Entropic Regularization [31.6158163883893]
We propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
Our approach achieves competitive (or better) performance in terms of robust classification accuracy.
arXiv Detail & Related papers (2020-08-27T18:54:43Z)
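A toy version of the negative-selection step from the Max-Margin Contrastive Learning entry above: a linear SVM, whose dual is a quadratic program, separates positives from a candidate pool, and the pool points that end up as support vectors are kept as the "hard" negatives. The embeddings, sizes, and selection rule here are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    positives = rng.normal(0.0, 1.0, size=(16, 64))  # hypothetical anchor/positive embeddings
    pool = rng.normal(0.5, 1.0, size=(256, 64))      # hypothetical candidate negatives

    X = np.vstack([positives, pool])
    labels = np.array([1] * len(positives) + [0] * len(pool))

    # The SVM dual is a quadratic program; its support vectors are the
    # points that pin down the max-margin boundary.
    svm = SVC(kernel="linear", C=1.0).fit(X, labels)
    sv_pool = svm.support_[svm.support_ >= len(positives)] - len(positives)
    hard_negatives = pool[sv_pool]
    print(f"kept {len(hard_negatives)} of {len(pool)} pool points as negatives")

Because the support vectors are sparse, this selects only the few pool points that actually constrain the margin, rather than contrasting against every candidate.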