Robustifying Binary Classification to Adversarial Perturbation
- URL: http://arxiv.org/abs/2010.15391v1
- Date: Thu, 29 Oct 2020 07:20:37 GMT
- Title: Robustifying Binary Classification to Adversarial Perturbation
- Authors: Fariborz Salehi, Babak Hassibi
- Abstract summary: In this paper we consider the problem of binary classification with adversarial perturbations.
We introduce a generalization to the max-margin classifier which takes into account the power of the adversary in manipulating the data.
Under some mild assumptions on the loss function, we theoretically show that the gradient descent iterates (with sufficiently small step size) converge in direction to the RM classifier.
- Score: 45.347651499585055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the enormous success of machine learning models in various
applications, most of these models lack resilience to (even small)
perturbations in their input data. Hence, new methods to robustify machine
learning models seem very essential. To this end, in this paper we consider the
problem of binary classification with adversarial perturbations. Investigating
the solution to a min-max optimization (which considers the worst-case loss in
the presence of adversarial perturbations) we introduce a generalization to the
max-margin classifier which takes into account the power of the adversary in
manipulating the data. We refer to this classifier as the "Robust Max-margin"
(RM) classifier. Under some mild assumptions on the loss function, we
theoretically show that the gradient descent iterates (with sufficiently small
step size) converge to the RM classifier in its direction. Therefore, the RM
classifier can be studied to compute various performance measures (e.g.
generalization error) of binary classification with adversarial perturbations.
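Before the related work, here is a hedged sketch of the two objects the abstract refers to, written under assumptions not stated above: the perturbations are taken to be bounded in some norm with dual norm written as ||.||_*, and the loss is assumed non-increasing in the margin, so the inner maximization has a closed form.

```latex
% Hedged sketch, not the paper's exact notation or perturbation model.
% Worst-case (adversarial) empirical risk: for a non-increasing margin
% loss \ell, the adversary simply subtracts \epsilon\|\mathbf{w}\|_*
% from each margin, which collapses the inner maximization.
\begin{align}
  \min_{\mathbf{w}}\ \frac{1}{n}\sum_{i=1}^{n}
    \max_{\|\boldsymbol{\delta}_i\|\le\epsilon}
    \ell\bigl(y_i\,\mathbf{w}^{\top}(\mathbf{x}_i+\boldsymbol{\delta}_i)\bigr)
  \;=\;
  \min_{\mathbf{w}}\ \frac{1}{n}\sum_{i=1}^{n}
    \ell\bigl(y_i\,\mathbf{w}^{\top}\mathbf{x}_i-\epsilon\|\mathbf{w}\|_*\bigr).
\end{align}
% The "Robust Max-margin" (RM) classifier then maximizes the margin that
% survives the worst-case perturbation.
\begin{align}
  \mathbf{w}_{\mathrm{RM}}
  \;=\;
  \arg\max_{\|\mathbf{w}\|_2\le 1}\;
  \min_{1\le i\le n}\;
  \bigl(y_i\,\mathbf{w}^{\top}\mathbf{x}_i-\epsilon\|\mathbf{w}\|_*\bigr).
\end{align}
```

Under the abstract's assumptions, gradient descent on the left-hand objective with a sufficiently small step size then converges in direction to this RM classifier.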
Related papers
- Regularized Linear Regression for Binary Classification [20.710343135282116]
Regularized linear regression is a promising approach for binary classification problems in which the training set has noisy labels.
We show that for large enough regularization strength, the optimal weights concentrate around two values of opposite sign.
We observe that in many cases the corresponding "compression" of each weight to a single bit leads to very little loss in performance.
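As a rough numerical illustration of this one-bit compression, here is a hedged sketch; the Gaussian data model, the regularization strength, and the use of scikit-learn's Ridge are illustrative assumptions rather than the paper's setup.

```python
# Hedged sketch: ridge regression on +/-1 labels, then compress each weight
# to a single bit (its sign, times a shared magnitude) and compare accuracy.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 500, 200
w_true = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true + 0.3 * rng.standard_normal(n))   # noisy +/-1 labels

# large regularization strength, as in the concentration claim above
model = Ridge(alpha=50.0, fit_intercept=False).fit(X, y)
w = model.coef_

# one-bit compression: keep only the sign of each weight
w_bin = np.sign(w) * np.mean(np.abs(w))

X_test = rng.standard_normal((2000, d))
y_test = np.sign(X_test @ w_true)
acc = lambda wv: np.mean(np.sign(X_test @ wv) == y_test)
print(f"ridge accuracy:   {acc(w):.3f}")
print(f"one-bit accuracy: {acc(w_bin):.3f}")
```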
arXiv Detail & Related papers (2023-11-03T23:18:21Z)
- Extension of Transformational Machine Learning: Classification Problems [0.0]
This study explores the application and performance of Transformational Machine Learning (TML) in drug discovery.
TML, a meta learning algorithm, excels in exploiting common attributes across various domains.
The drug discovery process, which is complex and time-consuming, can benefit greatly from the enhanced prediction accuracy.
arXiv Detail & Related papers (2023-08-07T07:34:18Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
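As a toy illustration of the flavor of this construction, here is a hedged sketch of the binary, pairwise special case only: two oppositely labeled points whose epsilon-balls intersect cannot both be robustly classified, so a maximum matching on the resulting conflict graph lower-bounds the optimal robust 0-1 loss. The data model, the 2-epsilon overlap test, and the use of networkx are assumptions for illustration; the paper's hypergraph construction is more general.

```python
# Hedged toy sketch (binary, pairwise case): each matched conflicting pair
# forces at least one robust error, giving a lower bound on the 0-1 loss.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
eps = 0.8
X_pos = rng.normal(+1.0, 1.0, size=(50, 2))
X_neg = rng.normal(-1.0, 1.0, size=(50, 2))

G = nx.Graph()
G.add_nodes_from(range(100))
for i, xp in enumerate(X_pos):
    for j, xn in enumerate(X_neg):
        if np.linalg.norm(xp - xn) <= 2 * eps:      # attack sets intersect
            G.add_edge(i, 50 + j)

matching = nx.max_weight_matching(G, maxcardinality=True)
print(f"robust 0-1 loss >= {len(matching) / 100:.3f}")
```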
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain high accuracy even as the missing-data ratio increases.
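A minimal sketch of the first (imputation-based) strategy, under assumptions not stated above: synthetic data, scikit-learn's KNNImputer, and a plain sample covariance; the downstream Riemannian classifier is omitted.

```python
# Hedged sketch: k-NN imputation of missing EEG samples, then a covariance
# estimate per trial (the Riemannian classification step is not shown).
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(2)
n_channels, n_times = 8, 256
trial = rng.standard_normal((n_channels, n_times))
trial[rng.random(trial.shape) < 0.1] = np.nan        # 10% missing values

# impute along time: each time sample is one observation, channels are features
imputed = KNNImputer(n_neighbors=5).fit_transform(trial.T).T
cov = np.cov(imputed)                                # channels x channels matrix
print(cov.shape)
```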
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation [39.02017410837255]
We study benign overfitting in multiclass linear classification.
We consider several standard training algorithms on separable data.
We derive novel bounds on the accuracy of the minimum-norm interpolating (MNI) classifier.
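A hedged sketch of a minimum-norm interpolating (MNI) classifier in an overparameterized linear model; the Gaussian-mixture data and the use of the pseudoinverse as the minimum-norm least-squares solution are assumptions for illustration.

```python
# Hedged sketch: the minimum-norm interpolator of one-hot labels in an
# overparameterized (d > n) linear model, used as a multiclass classifier.
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 100, 400, 3                      # overparameterized: d > n
means = 3.0 * rng.standard_normal((k, d)) / np.sqrt(d)
labels = rng.integers(0, k, size=n)
X = means[labels] + rng.standard_normal((n, d))
Y = np.eye(k)[labels]                      # one-hot targets

W = np.linalg.pinv(X) @ Y                  # minimum-norm solution of XW = Y
print("train error:", np.mean(np.argmax(X @ W, axis=1) != labels))

X_test = means[labels] + rng.standard_normal((n, d))
print("test  error:", np.mean(np.argmax(X_test @ W, axis=1) != labels))
```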
arXiv Detail & Related papers (2021-06-21T05:34:36Z)
- Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions [79.35722941720734]
Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks.
We prove exact asymptotics characterising the estimator in high dimensions via empirical risk minimisation.
We discuss how our theory can be applied beyond the scope of synthetic data.
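A minimal finite-size sketch of the setting, not the asymptotic theory itself: two-class Gaussian mixture data fit by logistic-regression empirical risk minimisation. The dimensions, mixture strength, and scikit-learn usage are assumptions for illustration.

```python
# Hedged sketch: empirical risk minimisation (logistic regression) on
# two-class Gaussian mixture data in a proportional high-dimensional regime.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, d = 400, 300
mu = 2.0 * rng.standard_normal(d) / np.sqrt(d)       # class mean, norm ~ 2
y = rng.choice([-1, 1], size=n)
X = np.outer(y, mu) + rng.standard_normal((n, d))

clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

X_test = np.outer(y, mu) + rng.standard_normal((n, d))
print("test error:", np.mean(clf.predict(X_test) != y))
```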
arXiv Detail & Related papers (2021-06-07T16:53:56Z)
- On the Error Resistance of Hinge Loss Minimization [30.808062097285706]
We identify a set of conditions on the data under which surrogate loss minimization algorithms provably learn the correct classifier.
In particular, we show that if the data is linearly classifiable with a slightly non-trivial margin, then surrogate loss minimization has negligible error on the uncorrupted data.
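A toy sketch of the statement above, under assumed data and corruption models: hinge-loss minimization (a linear SVM) trained on margin-separable data with a fraction of flipped labels, then evaluated against the clean labels.

```python
# Hedged toy sketch: minimize the hinge loss on margin-separable data whose
# labels are partly corrupted, then measure error on the clean labels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n, d = 500, 20
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
X = rng.standard_normal((n, d))
margin = X @ w_star
keep = np.abs(margin) > 0.5                          # enforce a non-trivial margin
X, margin = X[keep], margin[keep]
y_clean = np.sign(margin)

y_noisy = y_clean.copy()
flip = rng.random(len(y_noisy)) < 0.1                # corrupt 10% of labels
y_noisy[flip] *= -1

clf = LinearSVC(C=1.0, loss="hinge", max_iter=10000).fit(X, y_noisy)
print("error on clean labels:", np.mean(clf.predict(X) != y_clean))
```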
arXiv Detail & Related papers (2020-12-02T06:49:24Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first precise high-dimensional asymptotic analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Asymptotic Behavior of Adversarial Training in Binary Classification [41.7567932118769]
Adversarial training is considered to be the state-of-the-art method for defense against adversarial attacks.
Despite being successful in practice, several problems in understanding the performance of adversarial training remain open.
We derive precise theoretical predictions for the performance of adversarial training in binary classification.
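A minimal sketch of adversarial training for a linear binary classifier, assuming an l2-bounded attack and logistic loss (choices not stated above): for a linear model the worst-case perturbation has a closed form, so the inner maximization can be folded directly into the margin.

```python
# Hedged sketch: adversarial training of a linear classifier with logistic
# loss; under an l2-bounded attack the worst-case perturbation of a linear
# model subtracts eps * ||w|| from each margin, so no inner loop is needed.
import numpy as np

rng = np.random.default_rng(6)
n, d, eps, lr = 400, 100, 0.3, 0.1
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)

w = np.zeros(d)
for _ in range(2000):
    w_norm = np.linalg.norm(w) + 1e-12
    margins = y * (X @ w) - eps * w_norm             # adversarial margins
    s = -0.5 * (1.0 - np.tanh(margins / 2.0))        # d/dm of log(1 + exp(-m))
    grad = (X.T @ (s * y) - np.sum(s) * eps * w / w_norm) / n
    w -= lr * grad

print("robust train error:",
      np.mean(y * (X @ w) - eps * np.linalg.norm(w) <= 0))
```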
arXiv Detail & Related papers (2020-10-26T01:44:20Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.