Adversarial Classification: Necessary conditions and geometric flows
- URL: http://arxiv.org/abs/2011.10797v2
- Date: Fri, 11 Mar 2022 18:46:40 GMT
- Title: Adversarial Classification: Necessary conditions and geometric flows
- Authors: Nicolas Garcia Trillos and Ryan Murray
- Abstract summary: We study a version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance $\varepsilon$.
We derive a geometric evolution equation which can be used to track the change in classification boundaries as $\varepsilon$ varies.
- Score: 0.7614628596146599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a version of adversarial classification where an adversary is
empowered to corrupt data inputs up to some distance $\varepsilon$, using tools
from variational analysis. In particular, we describe necessary conditions
associated with the optimal classifier subject to such an adversary. Using the
necessary conditions, we derive a geometric evolution equation which can be
used to track the change in classification boundaries as $\varepsilon$ varies.
This evolution equation may be described as an uncoupled system of differential
equations in one dimension, or as a mean curvature type equation in higher
dimension. In one dimension, and under mild assumptions on the data
distribution, we rigorously prove that one can use the initial value problem
starting from $\varepsilon=0$, which is simply the Bayes classifier, in order
to solve for the global minimizer of the adversarial problem for small values
of $\varepsilon$. In higher dimensions we provide a similar result, albeit
conditional to the existence of regular solutions of the initial value problem.
In the process of proving our main results we obtain a result of independent
interest connecting the original adversarial problem with an optimal transport
problem under no assumptions on whether classes are balanced or not. Numerical
examples illustrating these ideas are also presented.
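To make the one-dimensional picture concrete, below is a minimal numerical sketch of the object the abstract describes: the optimal decision boundary of an adversarially robust classifier as a function of the adversary's budget $\varepsilon$, starting at $\varepsilon = 0$ from the Bayes classifier. It assumes a hypothetical setup with two Gaussian class conditionals and (possibly unbalanced) class weights, and it simply re-minimizes the adversarial risk over a grid of thresholds for each $\varepsilon$ rather than integrating the paper's evolution equation; the distribution parameters (mu0, mu1, sigma, w0, w1) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's method): track the optimal 1D decision
# threshold of an adversarially robust classifier as the adversary's budget
# epsilon grows, starting from the Bayes threshold at epsilon = 0.
# Assumed setup (hypothetical): class-conditional densities N(mu0, sigma^2) and
# N(mu1, sigma^2) with class weights w0, w1, and the classifier "predict class 1
# iff x > t". The paper derives an evolution equation for t(epsilon); here we
# simply re-minimize the adversarial risk on a grid for each epsilon.

import numpy as np
from scipy.stats import norm

# Hypothetical data-distribution parameters.
mu0, mu1, sigma = -1.0, 1.5, 1.0
w0, w1 = 0.6, 0.4  # possibly unbalanced classes

def adversarial_risk(t, eps):
    """Adversarial risk of the threshold classifier x -> 1{x > t}.

    A class-0 point at x can be pushed across the boundary by some perturbation
    of size <= eps iff x > t - eps; a class-1 point iff x <= t + eps.
    """
    err0 = 1.0 - norm.cdf(t - eps, loc=mu0, scale=sigma)
    err1 = norm.cdf(t + eps, loc=mu1, scale=sigma)
    return w0 * err0 + w1 * err1

# Minimize over a grid of thresholds for each adversary budget.
ts = np.linspace(-4.0, 4.0, 4001)
for eps in [0.0, 0.1, 0.25, 0.5]:
    risks = adversarial_risk(ts, eps)
    t_star = ts[np.argmin(risks)]
    tag = " (Bayes classifier)" if eps == 0.0 else ""
    print(f"eps = {eps:.2f}: optimal threshold = {t_star:+.3f}, "
          f"adversarial risk = {risks.min():.4f}{tag}")
```

In this toy setting, the map $\varepsilon \mapsto t^*(\varepsilon)$ is the kind of boundary trajectory the paper's geometric flow describes; the one-dimensional result in the abstract states that, under mild assumptions on the data distribution, integrating the evolution equation from the Bayes threshold at $\varepsilon = 0$ recovers the global minimizer for small $\varepsilon$, instead of re-solving the minimization at each budget as done here.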
Related papers
- On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds [11.30047438005394]
This work investigates the question of how to choose the regularization norm $\lVert \cdot \rVert$ in the context of high-dimensional adversarial training for binary classification.
We quantitatively characterize the relationship between perturbation size and the optimal choice of $\lVert \cdot \rVert$, confirming the intuition that, in the data-scarce regime, the type of regularization becomes increasingly important for adversarial training as perturbations grow in size.
arXiv Detail & Related papers (2024-10-21T14:53:12Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space in which to model functions learned by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Trading off Consistency and Dimensionality of Convex Surrogates for the Mode [6.096888891865663]
In multiclass classification over $n$ outcomes, the outcomes must be embedded into the reals with dimension at least $n-1$ in order to design a consistent surrogate loss.
We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex.
We show that full-dimensional subsets of the simplex exist around each point mass distribution for which consistency holds, but also that, with fewer than $n-1$ dimensions, there exist distributions for which a phenomenon called hallucination occurs.
arXiv Detail & Related papers (2024-02-16T16:42:09Z)
- Tempered Calculus for ML: Application to Hyperbolic Model Embedding [70.61101116794549]
Most mathematical distortions used in ML are fundamentally integral in nature.
In this paper, we unveil a grounded theory and tools which can help improve these distortions to better cope with ML requirements.
We show how to apply it to a problem that has recently gained traction in ML: hyperbolic embeddings with a "cheap" and accurate encoding along the hyperbolic vs Euclidean scale.
arXiv Detail & Related papers (2024-02-06T17:21:06Z)
- Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is demonstrated on classical model examples: the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z)
- Deep Neural Networks for Nonparametric Interaction Models with Diverging Dimension [6.939768185086753]
We analyze a $k$th-order nonparametric interaction model both in the growing-dimension scenario ($d$ grows with $n$ but at a slower rate) and in the high-dimensional regime ($d \gtrsim n$).
We show that, under certain standard assumptions, debiased deep neural networks achieve a minimax optimal rate in terms of $(n, d)$.
arXiv Detail & Related papers (2023-02-12T04:19:39Z)
- A Primal-Dual Approach to Solving Variational Inequalities with General Constraints [54.62996442406718]
Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities.
We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and monotone.
arXiv Detail & Related papers (2022-10-27T17:59:09Z)
- Localization, Convexity, and Star Aggregation [0.0]
Offset Rademacher complexities have been shown to imply sharp, linear-dependent upper bounds for the square loss.
We show that in the statistical setting, the offset bound can be generalized to any loss satisfying a certain uniform convexity condition.
arXiv Detail & Related papers (2021-05-19T00:47:59Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
- Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
- A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers [3.167685495996986]
This paper establishes a precise high-dimensional theory for boosting on separable data.
Under a class of statistical models, we provide an exact analysis of the generalization error of boosting.
We also explicitly pin down the relation between the boosting test error and the optimal Bayes error.
arXiv Detail & Related papers (2020-02-05T00:24:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.