How many dimensions are required to find an adversarial example?
- URL: http://arxiv.org/abs/2303.14173v2
- Date: Tue, 11 Apr 2023 01:03:31 GMT
- Title: How many dimensions are required to find an adversarial example?
- Authors: Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis
Brown, Tim Doster, and Eleanor Byler
- Abstract summary: We investigate how adversarial vulnerability depends on $\dim(V)$.
In particular, we show that the adversarial success of standard PGD attacks with $\ell^p$ norm constraints behaves like a monotonically increasing function of $\epsilon (\frac{\dim(V)}{\dim \mathcal{X}})^{\frac{1}{q}}$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Past work exploring adversarial vulnerability has focused on situations
where an adversary can perturb all dimensions of model input. On the other
hand, a range of recent works consider the case where an adversary can perturb
either (i) a limited number of input parameters or (ii) a subset of the
modalities in a multimodal problem. In both of these cases, adversarial examples are
effectively constrained to a subspace $V$ in the ambient input space
$\mathcal{X}$. Motivated by this, in this work we investigate how adversarial
vulnerability depends on $\dim(V)$. In particular, we show that the adversarial
success of standard PGD attacks with $\ell^p$ norm constraints behaves like a
monotonically increasing function of $\epsilon (\frac{\dim(V)}{\dim
\mathcal{X}})^{\frac{1}{q}}$ where $\epsilon$ is the perturbation budget and
$\frac{1}{p} + \frac{1}{q} =1$, provided $p > 1$ (the case $p=1$ presents
additional subtleties which we analyze in some detail). This functional form
can be easily derived from a simple toy linear model, and as such our results
lend further credence to arguments that adversarial examples are endemic to
locally linear models on high dimensional spaces.
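The toy linear-model argument behind this functional form is easy to check numerically. Below is a minimal sketch (not the authors' code; the dimension, the budget $\epsilon$, and variable names are illustrative assumptions): for a linear loss $L(x) = w \cdot x$, the best perturbation supported on a coordinate subspace $V$ of dimension $k$ with $\|\delta\|_p \le \epsilon$ increases the loss by $\epsilon\|w_V\|_q$ (Hölder), which for a random $V$ concentrates around $\epsilon (k/d)^{1/q}\|w\|_q$.

```python
# Sketch of the toy linear-model scaling argument (illustrative values, not the
# authors' code): attack gain on a random k-dimensional coordinate subspace vs.
# the predicted scale eps * (k/d)^(1/q) * ||w||_q.
import numpy as np

rng = np.random.default_rng(0)
d, p = 10_000, 2.0             # ambient dimension and attack norm (assumed values)
q = p / (p - 1.0)              # dual exponent, 1/p + 1/q = 1 (requires p > 1)
eps = 0.5                      # perturbation budget (assumed value)
w = rng.normal(size=d)         # gradient of the toy linear loss L(x) = w . x

def worst_case_gain(k: int) -> float:
    """Best increase of w . delta with supp(delta) in a random k-dimensional
    coordinate subspace and ||delta||_p <= eps; equals eps * ||w_V||_q by Hoelder."""
    idx = rng.choice(d, size=k, replace=False)
    return eps * np.linalg.norm(w[idx], ord=q)

for k in (100, 1_000, 5_000, 10_000):
    predicted = eps * (k / d) ** (1.0 / q) * np.linalg.norm(w, ord=q)
    print(f"dim(V)={k:>6}  attack gain={worst_case_gain(k):8.2f}  "
          f"predicted scale={predicted:8.2f}")
```

For $p > 1$ the two columns track each other closely, which is the sense in which attack success can behave like a function of $\epsilon(\dim(V)/\dim\mathcal{X})^{1/q}$; for $p = 1$ (so $q = \infty$) this concentration argument breaks down, consistent with the additional subtleties the abstract flags.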
Related papers
- Trading off Consistency and Dimensionality of Convex Surrogates for the
Mode [6.096888891865663]
In multiclass classification over $n$ outcomes, the outcomes must be embedded into a real vector space of dimension at least $n-1$.
We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex.
We show that full-dimensional subsets of the simplex exist around each point mass distribution for which consistency holds, but also, with less than $n-1$ dimensions, there exist distributions for which a phenomenon called hallucination occurs.
arXiv Detail & Related papers (2024-02-16T16:42:09Z)
- Learning linear dynamical systems under convex constraints [4.4351901934764975]
We consider the problem of identification of linear dynamical systems from $T$ samples of a single trajectory.
$A^*$ can be reliably estimated with values of $T$ smaller than what is needed in the unconstrained setting.
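As a rough illustration of the identification problem this entry refers to (a sketch under assumed dynamics and noise, not the paper's constrained estimator), one can roll out a single trajectory of $x_{t+1} = A^* x_t + w_t$ and fit $A^*$ by ordinary least squares; the paper's point is that exploiting a convex constraint on $A^*$ lowers the trajectory length $T$ needed for reliable estimation.

```python
# Minimal sketch: unconstrained least-squares identification of A* from one
# trajectory (the paper's constrained variant would additionally project onto
# a convex set); dimensions and noise level are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, T = 5, 200                                   # state dimension and trajectory length
A_star = 0.9 * np.eye(n) + 0.05 * rng.normal(size=(n, n))

X = np.zeros((T + 1, n))
for t in range(T):                              # roll out x_{t+1} = A* x_t + w_t
    X[t + 1] = X[t] @ A_star.T + 0.1 * rng.normal(size=n)

# min_A sum_t ||x_{t+1} - A x_t||^2, solved by least squares
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = M.T
print("estimation error ||A_hat - A*||_F =", np.linalg.norm(A_hat - A_star))
```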
arXiv Detail & Related papers (2023-03-27T11:49:40Z)
- The Sample Complexity of Online Contract Design [120.9833763323407]
We study the hidden-action principal-agent problem in an online setting.
In each round, the principal posts a contract that specifies the payment to the agent based on each outcome.
The agent then makes a strategic choice of action that maximizes her own utility, but the action is not directly observable by the principal.
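The round structure just described can be made concrete with a toy simulation; the action costs, outcome distributions, and rewards below are made-up numbers for illustration and do not come from the paper.

```python
# Toy sketch of one round of the hidden-action principal-agent interaction:
# the principal posts per-outcome payments, the agent best-responds, and the
# principal observes only the sampled outcome (all numbers are assumed).
import numpy as np

rng = np.random.default_rng(2)
costs = np.array([0.0, 0.2, 0.5])              # cost of each hidden action
outcome_probs = np.array([[0.8, 0.2],          # action 0: mostly the low outcome
                          [0.5, 0.5],
                          [0.1, 0.9]])         # action 2: mostly the high outcome
rewards = np.array([0.0, 1.0])                 # principal's value of each outcome

def play_round(contract: np.ndarray) -> float:
    """Post `contract` (payment per outcome) and return the principal's realized utility."""
    agent_utility = outcome_probs @ contract - costs
    action = int(np.argmax(agent_utility))     # agent's strategic best response (unobserved)
    outcome = rng.choice(len(rewards), p=outcome_probs[action])
    return float(rewards[outcome] - contract[outcome])

print("principal utility:", play_round(np.array([0.0, 0.4])))
```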
arXiv Detail & Related papers (2022-11-10T17:59:42Z)
- Lessons from $O(N)$ models in one dimension [0.0]
Various topics related to the $O(N)$ model in one spacetime dimension (i.e. ordinary quantum mechanics) are considered.
The focus is on a pedagogical presentation of quantum field theory methods in a simpler context.
arXiv Detail & Related papers (2021-09-14T11:36:30Z)
- Learning the optimal regularizer for inverse problems [1.763934678295407]
We consider the linear inverse problem $y = Ax + \epsilon$, where $A\colon X\to Y$ is a known linear operator between the separable Hilbert spaces $X$ and $Y$.
This setting covers several inverse problems in imaging including denoising, deblurring, and X-ray tomography.
Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but learned from data.
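For concreteness, here is a minimal sketch of the classical variational formulation behind this setting, with a simple Tikhonov penalty $\lambda\|x\|^2$ standing in for the learned regularizer (the operator, noise level, and $\lambda$ are illustrative assumptions, not the paper's learned functional).

```python
# Sketch of classical regularization for y = A x + epsilon:
#   x_hat = argmin_x ||A x - y||^2 + lam * ||x||^2  <=>  (A^T A + lam I) x_hat = A^T y
# Here A is the identity (denoising) and lam is hand-picked; the paper instead
# learns the regularizer from data.
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = np.eye(n)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 0.2 * rng.normal(size=n)       # noisy observation

lam = 0.5
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```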
arXiv Detail & Related papers (2021-06-11T17:14:27Z)
- Towards Defending Multiple $\ell_p$-norm Bounded Adversarial
Perturbations via Gated Batch Normalization [120.99395850108422]
Existing adversarial defenses typically improve model robustness against individual specific perturbations.
Some recent methods improve model robustness against adversarial attacks in multiple $\ell_p$ balls, but their performance against each perturbation type is still far from satisfactory.
We propose Gated Batch Normalization (GBN) to adversarially train a perturbation-invariant predictor for defending multiple $\ell_p$-bounded adversarial perturbations.
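A rough sketch of the gating idea follows: one BatchNorm branch per perturbation type, mixed by a learned gate. The branch count, gate design, and names are assumptions for illustration and not the authors' exact architecture.

```python
# Rough sketch of a gated batch-normalization layer: perturbation-type-specific
# BatchNorm branches combined by a learned soft gate (illustrative design only).
import torch
import torch.nn as nn

class GatedBatchNorm2d(nn.Module):
    def __init__(self, num_features: int, num_perturbation_types: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.BatchNorm2d(num_features) for _ in range(num_perturbation_types)]
        )
        # Gate: global-average-pooled features -> soft weights over the branches.
        self.gate = nn.Linear(num_features, num_perturbation_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)   # (B, K)
        outs = torch.stack([bn(x) for bn in self.branches], dim=1)      # (B, K, C, H, W)
        return (weights[:, :, None, None, None] * outs).sum(dim=1)

x = torch.randn(4, 16, 8, 8)
print(GatedBatchNorm2d(16)(x).shape)   # torch.Size([4, 16, 8, 8])
```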
arXiv Detail & Related papers (2020-12-03T02:26:01Z)
- Scattering data and bound states of a squeezed double-layer structure [77.34726150561087]
A structure composed of two parallel homogeneous layers is studied in the limit as their widths $l_1$ and $l_2$ and the distance $r$ between them shrink to zero simultaneously.
The existence of non-trivial bound states is proven in the squeezing limit, including the particular example of the squeezed potential in the form of the derivative of Dirac's delta function.
The scenario how a single bound state survives in the squeezed system from a finite number of bound states in the finite system is described in detail.
arXiv Detail & Related papers (2020-11-23T14:40:27Z)
- Improving Robustness and Generality of NLP Models Using Disentangled
Representations [62.08794500431367]
Supervised neural networks first map an input $x$ to a single representation $z$, and then map $z$ to the output label $y$.
We present methods to improve robustness and generality of NLP models from the standpoint of disentangled representation learning.
We show that models trained with the proposed criteria provide better robustness and domain adaptation ability in a wide range of supervised learning tasks.
arXiv Detail & Related papers (2020-09-21T02:48:46Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
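The best-subset-selection ingredient mentioned here can be sketched in isolation (the doubly growing epoch schedule and the bandit interaction are omitted; the sparsity level and variable names are assumed):

```python
# Sketch of best subset selection for sparse linear regression: exhaustively
# search supports of size s and keep the one with the smallest residual.
from itertools import combinations
import numpy as np

def best_subset_ols(X: np.ndarray, y: np.ndarray, s: int) -> np.ndarray:
    d = X.shape[1]
    best_beta, best_rss = np.zeros(d), np.inf
    for support in combinations(range(d), s):
        cols = list(support)
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.linalg.norm(y - X[:, cols] @ coef) ** 2)
        if rss < best_rss:
            best_rss = rss
            best_beta = np.zeros(d)
            best_beta[cols] = coef
    return best_beta

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 8))
beta_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(best_subset_ols(X, y, s=2).round(2))     # recovers the two active coordinates
```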
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
- Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed
Payoffs [35.988644745703645]
We analyze the linear bandits with heavy-tailed payoffs, where the payoffs admit finite $1+\epsilon$ moments.
We propose two novel algorithms which enjoy a sublinear regret bound of $\widetilde{O}(d^{\frac{1}{2}}T^{\frac{1}{1+\epsilon}})$.
arXiv Detail & Related papers (2020-04-28T13:01:38Z)
- Toward Adversarial Robustness via Semi-supervised Robust Training [93.36310070269643]
Adversarial examples have been shown to be a severe threat to deep neural networks (DNNs).
We propose a novel defense method, Robust Training (RT), by jointly minimizing two separate risks ($R_{stand}$ and $R_{rob}$).
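A minimal sketch of such a joint objective is given below; the exact definitions of $R_{stand}$ and $R_{rob}$ in the paper may differ, and the single-step perturbation used here is only illustrative.

```python
# Sketch of a joint standard + robustness objective (illustrative, not the
# paper's exact risks): cross-entropy on clean inputs plus a term that keeps
# predictions stable under a one-step adversarial perturbation.
import torch
import torch.nn.functional as F

def robust_training_loss(model, x, y, eps=0.03, lam=1.0):
    logits = model(x)
    r_stand = F.cross_entropy(logits, y)                 # standard risk term

    x_adv = x.detach().clone().requires_grad_(True)      # one-step (FGSM-style) perturbation
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()

    r_rob = F.kl_div(F.log_softmax(model(x_adv), dim=1), # robustness risk term
                     F.softmax(logits.detach(), dim=1),
                     reduction="batchmean")
    return r_stand + lam * r_rob
```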
arXiv Detail & Related papers (2020-03-16T02:14:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.