On the existence of solutions to adversarial training in multiclass
classification
- URL: http://arxiv.org/abs/2305.00075v2
- Date: Mon, 29 May 2023 06:19:45 GMT
- Authors: Nicolas Garcia Trillos, Matt Jacobs, Jakwang Kim
- Abstract summary: We prove the existence of Borel measurable robust classifiers in three models of the problem of adversarial training in multiclass classification.
As a corollary to our results, we prove the existence of Borel measurable solutions to the agnostic adversarial training problem in the binary classification setting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study three models of the problem of adversarial training in multiclass
classification designed to construct robust classifiers against adversarial
perturbations of data in the agnostic-classifier setting. We prove the
existence of Borel measurable robust classifiers in each model and provide a
unified perspective of the adversarial training problem, expanding the
connections with optimal transport initiated by the authors in previous work
and developing new connections between adversarial training in the multiclass
setting and total variation regularization. As a corollary of our results, we
prove the existence of Borel measurable solutions to the agnostic adversarial
training problem in the binary classification setting, improving upon prior
results in the adversarial training literature, where robust classifiers
were only known to exist within the enlarged universal $\sigma$-algebra of the
feature space.
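For orientation, the agnostic adversarial training problem whose solutions the paper studies can be sketched in the following generic form; the notation below is an assumed standard formulation, not taken verbatim from the paper:

```latex
% Sketch of the agnostic adversarial training objective (notation assumed).
% \mu: data distribution on X x Y; f: candidate (Borel) classifier;
% \ell: classification loss; B_\varepsilon(x): adversarial budget,
% e.g. a metric ball of radius \varepsilon around x.
\inf_{f \ \text{Borel}} \;
  \mathbb{E}_{(x,y)\sim\mu}
  \Big[ \sup_{\tilde{x} \,\in\, B_{\varepsilon}(x)}
        \ell\big(f(\tilde{x}),\, y\big) \Big]
```

The existence question is whether this infimum is attained by a Borel measurable classifier $f$, rather than only within an enlarged $\sigma$-algebra; the paper answers this affirmatively in three multiclass models.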
Related papers
- Class-Incremental Mixture of Gaussians for Deep Continual Learning [15.49323098362628]
We propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework.
We show that our model can effectively learn in memory-free scenarios with fixed extractors.
arXiv Detail & Related papers (2023-07-09T04:33:19Z) - Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the commonly used surrogate-based relaxation used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
arXiv Detail & Related papers (2023-06-19T16:00:48Z) - Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach to anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty measures are then used to detect anomalies.
arXiv Detail & Related papers (2022-12-23T00:50:41Z) - Learning Sample Reweighting for Accuracy and Adversarial Robustness [15.591611864928659]
We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin.
Our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-20T18:25:11Z) - Resisting Adversarial Attacks in Deep Neural Networks using Diverse
Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z) - Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept of an adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z) - Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z) - Improving filling level classification with adversarial training [90.01594595780928]
We investigate the problem of classifying, from a single image, the level of content in a cup or a drinking glass.
We use adversarial training in a generic source dataset and then refine the training with a task-specific dataset.
We show that transfer learning with adversarial training in the source domain consistently improves the classification accuracy on the test set.
arXiv Detail & Related papers (2021-02-08T08:32:56Z) - Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.