Towards Consistency in Adversarial Classification
- URL: http://arxiv.org/abs/2205.10022v1
- Date: Fri, 20 May 2022 08:30:06 GMT
- Title: Towards Consistency in Adversarial Classification
- Authors: Laurent Meunier, Raphaël Ettedgui, Rafael Pinot, Yann Chevaleyre,
Jamal Atif
- Abstract summary: We study the problem of consistency in the context of adversarial examples.
We show that no convex surrogate loss can be consistent or calibrated in this context.
- Score: 17.91058673844592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of consistency in the context of
adversarial examples. Specifically, we tackle the following question: can
surrogate losses still be used as a proxy for minimizing the $0/1$ loss in the
presence of an adversary that alters the inputs at test-time? Different from
the standard classification task, this question cannot be reduced to a
point-wise minimization problem, and calibration need not be sufficient to
ensure consistency. In this paper, we expose some pathological behaviors
specific to the adversarial problem, and show that no convex surrogate loss can
be consistent or calibrated in this context. It is therefore necessary to
design another class of surrogate functions that can be used to solve the
adversarial consistency issue. As a first step towards designing such a class,
we identify necessary and sufficient conditions for a surrogate loss to be
calibrated in both the adversarial and standard settings. Finally, we give some
directions for building a class of losses that could be consistent in the
adversarial framework.
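For concreteness, the objects the abstract refers to can be written as follows; the notation is a schematic reconstruction of ours, not necessarily the paper's. Writing $\phi$ for a margin-based surrogate and $\varepsilon$ for the attacker's budget:

```latex
% Adversarial 0/1 risk and adversarial surrogate risk of a score function f
\[
  R^{\varepsilon}_{0/1}(f) = \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|\le\varepsilon}
      \mathbf{1}\{\, Y f(X+\delta) \le 0 \,\}\Big],
  \qquad
  R^{\varepsilon}_{\phi}(f) = \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|\le\varepsilon}
      \phi\big(Y f(X+\delta)\big)\Big].
\]
% phi is consistent if minimizing the surrogate risk minimizes the 0/1 risk:
\[
  R^{\varepsilon}_{\phi}(f_n) \to \inf_{f} R^{\varepsilon}_{\phi}(f)
  \;\Longrightarrow\;
  R^{\varepsilon}_{0/1}(f_n) \to \inf_{f} R^{\varepsilon}_{0/1}(f).
\]
```

Because the supremum couples nearby inputs, the conditional risk no longer decomposes point by point, which is why calibration arguments from the standard setting break down here.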
Related papers
- Rethinking Early Stopping: Refine, Then Calibrate [49.966899634962374]
We show that calibration error and refinement error are not minimized simultaneously during training.
We introduce a new metric for early stopping and hyperparameter tuning that makes it possible to minimize refinement error during training.
Our method integrates seamlessly with any architecture and consistently improves performance across diverse classification tasks.
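The split the summary alludes to is, in the classical proper-loss setting, the calibration-refinement decomposition; the following is its textbook form (our notation, the paper's exact quantities may differ):

```latex
\[
  \mathbb{E}\big[\ell(f(X), Y)\big]
  \;=\; \underbrace{\mathbb{E}\big[d\big(f(X),\, \mathbb{E}[Y \mid f(X)]\big)\big]}_{\text{calibration error}}
  \;+\; \underbrace{\mathbb{E}\big[\ell\big(\mathbb{E}[Y \mid f(X)],\, Y\big)\big]}_{\text{refinement error}}
\]
```

where $\ell$ is a proper loss and $d$ its associated Bregman divergence; training minimizes the sum, while the claim is that the two terms reach their minima at different times.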
arXiv Detail & Related papers (2025-01-31T15:03:54Z) - A Universal Growth Rate for Learning with Smooth Surrogate Losses [30.389055604165222]
We prove a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification.
We extend this analysis to multi-class classification with a series of novel results.
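Growth-rate statements of this kind translate into surrogate-to-target bounds whose shape near the optimum is, schematically (our notation):

```latex
\[
  R_{0/1}(f) - R^{*}_{0/1}
  \;\le\; C \,\big( R_{\phi}(f) - R^{*}_{\phi} \big)^{1/2}
  \qquad \text{as } R_{\phi}(f) \to R^{*}_{\phi},
\]
```

so halving the excess surrogate risk only shrinks the guaranteed excess 0/1 risk by a factor of $\sqrt{2}$; the square-root exponent is what is shown to be universal for smooth margin-based losses near zero.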
arXiv Detail & Related papers (2024-05-09T17:59:55Z) - The Adversarial Consistency of Surrogate Risks for Binary Classification [20.03511985572199]
Adversarial training seeks to minimize the expected $0$-$1$ loss when each example can be maliciously corrupted within a small ball.
We give a simple and complete characterization of the set of surrogate loss functions that are consistent.
Our results reveal that the class of adversarially consistent surrogates is substantially smaller than in the standard setting.
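For linear models the inner supremum in the adversarial 0/1 risk has a closed form, which makes the object easy to compute exactly; below is a minimal sketch of ours (not the paper's code) under $\ell_2$ perturbations:

```python
import numpy as np

def adversarial_01_loss(w, b, X, y, eps):
    """Adversarial 0/1 loss of the linear classifier f(x) = w @ x + b
    under l2 perturbations of radius eps.  For linear models the inner
    supremum has a closed form: the worst-case attack shifts every
    signed margin down by eps * ||w||_2."""
    margins = y * (X @ w + b)                  # signed margins y * f(x)
    worst = margins - eps * np.linalg.norm(w)  # margin after the attack
    return float(np.mean(worst <= 0.0))

# Toy check: a correctly classified point with a small margin becomes an
# error as soon as the budget eps exceeds its margin.
w, b = np.array([1.0, 0.0]), 0.0
X = np.array([[0.3, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
print(adversarial_01_loss(w, b, X, y, eps=0.0))   # 0.0 -- both correct
print(adversarial_01_loss(w, b, X, y, eps=0.5))   # 0.5 -- first point flipped
```

The same closed form is what makes linear models a convenient testbed for (in)consistency arguments: the attack simply subtracts $\varepsilon\|w\|_2$ from every margin.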
arXiv Detail & Related papers (2023-05-17T05:27:40Z) - An Embedding Framework for the Design and Analysis of Consistent
Polyhedral Surrogates [17.596501992526477]
We study the design of convex surrogate loss functions via embeddings, for problems such as classification, ranking, or structured prediction.
An embedding gives rise to a consistent surrogate loss as well as a consistent link function.
Our results are constructive, as we illustrate with several examples.
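As a rough picture of the framework (our schematic, not the paper's definitions): a polyhedral surrogate is a pointwise maximum of affine functions, and an embedding identifies each discrete report with a point at which the surrogate reproduces the target loss:

```latex
\[
  L(u, y) \;=\; \max_{1 \le i \le k} \big( \langle a_i, u \rangle + b_{i,y} \big),
  \qquad
  L(u_r, y) \;=\; \ell(r, y) \;\; \text{for every report } r \text{ and label } y.
\]
```

The hinge loss, for instance, is polyhedral, and the framework identifies exactly which discrete loss it embeds and which link function it induces.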
arXiv Detail & Related papers (2022-06-29T15:16:51Z) - Constrained Classification and Policy Learning [0.0]
We study consistency of surrogate loss procedures under a constrained set of classifiers.
We show that hinge losses are the only surrogate losses that preserve consistency in second-best scenarios.
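For reference, the hinge family singled out by this result is (standard definition):

```latex
\[
  \phi_{\text{hinge}}(z) \;=\; \max(0,\, 1 - z), \qquad z = y f(x).
\]
```

With an unconstrained classifier class many convex margin losses are consistent; the point here is that once $f$ is restricted to a constrained (second-best) class, consistency survives only for hinge-type losses.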
arXiv Detail & Related papers (2021-06-24T10:43:00Z) - Lower-bounded proper losses for weakly supervised classification [73.974163801142]
We discuss the problem of weakly supervised classification, in which instances are given weak labels.
We derive a representation theorem for proper losses in supervised learning, which dualizes the Savage representation.
We experimentally demonstrate the effectiveness of our proposed approach, as compared to improper or unbounded losses.
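The Savage representation being dualized is, in the binary case (standard form, our notation):

```latex
\[
  \ell(q, y) \;=\; G(q) + g(q)\,(y - q),
  \qquad y \in \{0, 1\},\; q \in [0, 1],
\]
```

where $G(q) = \mathbb{E}_{Y \sim \mathrm{Bernoulli}(q)}[\ell(q, Y)]$ is the (concave) conditional Bayes risk and $g$ a supergradient of $G$; properness of $\ell$ is equivalent to admitting such a representation.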
arXiv Detail & Related papers (2021-03-04T08:47:07Z) - A Symmetric Loss Perspective of Reliable Machine Learning [87.68601212686086]
We review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization.
We demonstrate how robust AUC maximization can benefit natural language processing in problems where we want to learn only from relevant keywords.
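The symmetry condition in question is (standard definition):

```latex
\[
  \phi(z) + \phi(-z) = K \;\; \text{for all } z \in \mathbb{R},
  \qquad \text{e.g. the sigmoid loss } \phi(z) = \frac{1}{1 + e^{z}} \;\; (K = 1).
\]
```

Under symmetric label noise, the flipped labels then contribute only a constant to the risk, which is why BER and AUC objectives built on such losses are unaffected by the corruption.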
arXiv Detail & Related papers (2021-01-05T06:25:47Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation reveals a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
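The minimax formulation has, schematically, the following shape (our notation): a predictor fits the target from the representation while an adversary tries to recover the attribute the representation should be invariant to:

```latex
\[
  \min_{g,\, h}\; \max_{d}\;\;
  \mathbb{E}\big[\ell\big(h(g(X)),\, Y\big)\big]
  \;-\; \lambda\, \mathbb{E}\big[\ell\big(d(g(X)),\, A\big)\big]
\]
```

with encoder $g$, predictor $h$, adversary $d$, target $Y$, nuisance or protected attribute $A$, and a trade-off weight $\lambda \ge 0$.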
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - On Lower Bounds for Standard and Robust Gaussian Process Bandit
Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
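The quantity these lower bounds control is the cumulative regret (standard definition):

```latex
\[
  R_T \;=\; \sum_{t=1}^{T} \big( f(x^{*}) - f(x_t) \big),
  \qquad x^{*} \in \operatorname*{arg\,max}_{x} f(x),
\]
```

for an unknown $f$ of bounded RKHS norm, queried at the points $x_1, \dots, x_T$ chosen by the algorithm.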
arXiv Detail & Related papers (2020-08-20T03:48:14Z) - Calibrated Surrogate Losses for Adversarially Robust Classification [92.37268323142307]
We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to linear models.
We also show that if the underlying distribution satisfies Massart's noise condition, convex losses can be calibrated in the adversarial setting.
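Massart's noise condition, the assumption that restores calibration here, bounds the conditional label probability away from $1/2$ (standard statement):

```latex
\[
  \big|\, \eta(X) - \tfrac{1}{2} \,\big| \;\ge\; \gamma
  \quad \text{almost surely, for some } \gamma \in \big(0, \tfrac{1}{2}\big],
  \qquad \eta(x) = \mathbb{P}(Y = 1 \mid X = x).
\]
```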
arXiv Detail & Related papers (2020-05-28T02:40:42Z)