Proximal Splitting Adversarial Attacks for Semantic Segmentation
- URL: http://arxiv.org/abs/2206.07179v2
- Date: Fri, 31 Mar 2023 20:28:56 GMT
- Title: Proximal Splitting Adversarial Attacks for Semantic Segmentation
- Authors: Jérôme Rony, Jean-Christophe Pesquet, Ismail Ben Ayed
- Abstract summary: We propose a white-box attack based on proximal splitting that fools semantic segmentation models with much smaller $\ell_\infty$-norm perturbations.
Our attack significantly outperforms previously proposed ones, as well as classification attacks that we adapted for segmentation.
- Score: 33.53113858999438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classification has been the focal point of research on adversarial attacks,
but only a few works investigate methods suited to denser prediction tasks,
such as semantic segmentation. The methods proposed in these works do not
accurately solve the adversarial segmentation problem and, therefore,
overestimate the size of the perturbations required to fool models. Here, we
propose a white-box attack for these models based on a proximal splitting to
produce adversarial perturbations with much smaller $\ell_\infty$ norms. Our
attack can handle large numbers of constraints within a nonconvex minimization
framework via an Augmented Lagrangian approach, coupled with adaptive
constraint scaling and masking strategies. We demonstrate that our attack
significantly outperforms previously proposed ones, as well as classification
attacks that we adapted for segmentation, providing a first comprehensive
benchmark for this dense task.
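The abstract only sketches the optimization framework, so below is a minimal PyTorch illustration of the general augmented Lagrangian idea it describes: minimize the $\ell_\infty$ norm of the perturbation subject to one misclassification constraint per pixel, with one Lagrange multiplier per constraint. This is not the authors' algorithm: it uses the classical PHR multiplier rule and plain Adam steps instead of proximal splitting, and it omits the adaptive constraint scaling and masking; `model`, `x`, and `y` are placeholder inputs.
```python
import torch


def augmented_lagrangian_attack(model, x, y, steps=200, lr=0.01, rho=1.0):
    """Minimize ||delta||_inf subject to one misclassification constraint per pixel.

    model : segmentation network returning logits of shape (B, C, H, W)
    x     : images in [0, 1], shape (B, 3, H, W)
    y     : ground-truth label map (long), shape (B, H, W)
    """
    delta = torch.zeros_like(x, requires_grad=True)
    lam = torch.zeros_like(y, dtype=torch.float)       # one Lagrange multiplier per pixel
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0, 1))                  # (B, C, H, W)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)      # (B, H, W)
        others = logits.scatter(1, y.unsqueeze(1), float('-inf'))
        best_other = others.max(dim=1).values
        # c <= 0  <=>  the pixel is misclassified (margin constraint).
        c = true_logit - best_other

        # Classical PHR augmented-Lagrangian penalty for inequality constraints c <= 0.
        penalty = (torch.clamp(lam + rho * c, min=0) ** 2 - lam ** 2).sum() / (2 * rho)
        loss = torch.linalg.vector_norm(delta, ord=float('inf')) + penalty

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        with torch.no_grad():
            # Multiplier ascent on the (pre-step) constraint values.
            lam = torch.clamp(lam + rho * c.detach(), min=0)
            # Keep x + delta in the valid pixel range; a crude stand-in for the
            # proximal / projection steps of the actual method.
            delta.clamp_(-x, 1 - x)

    return torch.clamp(x + delta, 0, 1).detach()
```
The per-pixel margin constraint and the clamping to the valid pixel range are the only problem-specific parts; everything else is the generic augmented Lagrangian loop.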
Related papers
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly with respect to the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reformulates the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z) - Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z) - JMA: a General Algorithm to Craft Nearly Optimal Targeted Adversarial
Example [24.953032059932525]
We propose a more general, theoretically sound, targeted attack that resorts to the minimization of a Jacobian-induced Mahalanobis distance term.
The proposed algorithm provides an optimal solution to a linearized version of the adversarial example problem originally introduced by Szegedy et al.
arXiv Detail & Related papers (2024-01-02T13:03:29Z) - Balancing Act: Constraining Disparate Impact in Sparse Models [20.058720715290434]
We propose a constrained optimization approach that directly addresses the disparate impact of pruning.
Our formulation bounds the accuracy change between the dense and sparse models, for each sub-group.
Experimental results demonstrate that our technique scales reliably to problems involving large models and hundreds of protected sub-groups.
arXiv Detail & Related papers (2023-10-31T17:37:35Z) - On Evaluating the Adversarial Robustness of Semantic Segmentation Models [0.0]
A number of adversarial training approaches have been proposed as a defense against adversarial perturbation.
We show for the first time that a number of models in previous work that are claimed to be robust are in fact not robust at all.
We then evaluate simple adversarial training algorithms that produce reasonably robust models even under our set of strong attacks.
arXiv Detail & Related papers (2023-06-25T11:45:08Z) - SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and
Boosting Segmentation Robustness [63.726895965125145]
Deep neural network-based image classifiers are vulnerable to adversarial perturbations.
In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD.
Since SegPGD can create more effective adversarial examples, adversarial training with our SegPGD can boost the robustness of segmentation models; a generic PGD-style attack loop for segmentation is sketched after this list.
arXiv Detail & Related papers (2022-07-25T17:56:54Z) - Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete
Sequential Data via Bayesian Optimization [10.246596695310176]
We focus on the problem of adversarial attacks against models on discrete sequential data in the black-box setting.
We propose a query-efficient black-box attack using Bayesian optimization, which dynamically computes important positions.
We develop a post-optimization algorithm that finds adversarial examples with smaller perturbation size.
arXiv Detail & Related papers (2022-06-17T06:11:36Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few pixels.
Recent efforts combine this sparsity constraint with an additional $\ell_\infty$ bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation-magnitude constraints in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where the target label is treated at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
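As a companion to the SegPGD entry above, here is a generic $\ell_\infty$ PGD loop for segmentation models in PyTorch. It is not the SegPGD algorithm: the optional `wrong_pixel_weight` hook is only an assumed placeholder for the idea of balancing already-misclassified and still-correct pixels during the attack, and `model`, `x`, and `y` are placeholders as well.
```python
import torch
import torch.nn.functional as F


def pgd_segmentation_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20,
                            wrong_pixel_weight=None):
    """L_inf PGD on a segmentation model, maximizing the mean per-pixel cross-entropy.

    wrong_pixel_weight is an optional, assumed hook: given the iteration index it
    returns a weight in [0, 1] for already-misclassified pixels (the remaining
    weight goes to still-correct pixels).
    """
    delta = (torch.rand_like(x) * 2 - 1) * eps            # random start in the eps-ball
    delta.requires_grad_(True)

    for t in range(steps):
        logits = model(torch.clamp(x + delta, 0, 1))                  # (B, C, H, W)
        ce = F.cross_entropy(logits, y, reduction='none')             # (B, H, W)

        if wrong_pixel_weight is not None:
            w = wrong_pixel_weight(t, steps)
            wrong = (logits.argmax(dim=1) != y).float()
            ce = (1 - w) * (1 - wrong) * ce + w * wrong * ce

        grad, = torch.autograd.grad(ce.mean(), delta)

        with torch.no_grad():
            delta += alpha * grad.sign()                  # gradient ascent on the loss
            delta.clamp_(-eps, eps)                       # stay inside the eps-ball
            delta.clamp_(-x, 1 - x)                       # keep x + delta in [0, 1]

    return torch.clamp(x + delta, 0, 1).detach()
```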