Multiclass ASMA vs Targeted PGD Attack in Image Segmentation
- URL: http://arxiv.org/abs/2208.01844v1
- Date: Wed, 3 Aug 2022 05:05:30 GMT
- Title: Multiclass ASMA vs Targeted PGD Attack in Image Segmentation
- Authors: Johnson Vo (1), Jiabao Xie (1), and Sahil Patel (1) ((1) University of Toronto)
- Abstract summary: This paper explores the projected gradient descent (PGD) attack and the Adaptive Mask Segmentation Attack (ASMA) on the image segmentation DeepLabV3 model.
The existence of such attacks, however, puts all image classification deep learning networks in danger of exploitation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning networks have demonstrated high performance in a large variety
of applications, such as image classification, speech recognition, and natural
language processing. However, there exists a major vulnerability exploited by
the use of adversarial attacks. An adversarial attack perturbs an input image
by altering it very slightly, making the change nearly undetectable to the
naked eye, yet resulting in a very different classification by the network. This
paper explores the projected gradient descent (PGD) attack and the Adaptive
Mask Segmentation Attack (ASMA) on the image segmentation DeepLabV3 model using
two types of architectures: MobileNetV3 and ResNet50. It was found that PGD was
very consistent in changing the segmentation to its target, while the
generalization of ASMA to a multiclass target was not as effective. The
existence of such attacks, however, puts all image classification deep learning
networks in danger of exploitation.
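The core of the targeted PGD attack studied here can be summarized in a few lines. The following is a minimal sketch against a torchvision DeepLabV3-ResNet50 model, assuming a PyTorch setup; the hyperparameters (eps, alpha, steps), the helper name targeted_pgd, and the omission of input normalization are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

def targeted_pgd(model, image, target_mask, eps=8/255, alpha=2/255, steps=40):
    """Push the model's predicted segmentation toward target_mask (integer class
    ids, shape (N, H, W)) while keeping the perturbation within an L-infinity
    ball of radius eps around the clean image."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)["out"]                   # (N, num_classes, H, W)
        loss = F.cross_entropy(logits, target_mask)  # loss w.r.t. the *target* labels
        grad = torch.autograd.grad(loss, adv)[0]
        # Targeted attack: step *down* the loss toward the target (minus sign),
        # then project back into the eps-ball and the valid pixel range.
        adv = adv.detach() - alpha * grad.sign()
        adv = torch.clamp(adv, image - eps, image + eps).clamp(0.0, 1.0)
    return adv.detach()

# Illustrative usage: image is (1, 3, H, W) in [0, 1]; target_mask is (1, H, W).
model = deeplabv3_resnet50(weights="DEFAULT").eval()
```

The only difference from untargeted PGD is the sign of the update: the perturbation descends the loss computed against the attacker-chosen target mask instead of ascending the loss on the true labels.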
Related papers
- Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
arXiv Detail & Related papers (2024-04-03T09:09:42Z)
- Adversarial Attacks on Image Classification Models: Analysis and Defense [0.0]
This paper analyzes adversarial attacks on image classification models based on convolutional neural networks (CNN).
The fast gradient sign method (FGSM) is explored and its adverse effects on the performance of image classification models are examined.
A defense mechanism against the FGSM attack is proposed, based on a modified defensive distillation approach.
arXiv Detail & Related papers (2023-12-28T08:08:23Z)
- Extreme Image Transformations Facilitate Robust Latent Object Representations [1.2277343096128712]
Adversarial attacks can affect the object recognition capabilities of machines in the wild.
These can often result from spurious correlations between input and class labels, and are prone to memorization in large networks.
In this work, we show that fine-tuning any pretrained off-the-shelf network with Extreme Image Transformations (EIT) not only helps in learning a robust latent representation, it also improves the performance of these networks against common adversarial attacks of various intensities.
arXiv Detail & Related papers (2023-09-19T21:31:25Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact [0.0]
This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNN)
CNNs are very popular deep-learning models which are used in image classification tasks.
Two very well-known adversarial attacks are discussed and their impact on the performance of image classifiers is analyzed.
arXiv Detail & Related papers (2023-07-05T06:40:08Z)
- Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks [48.66027897216473]
We tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
We propose a novel generative attack (called Local Patch Difference or LPD-Attack), where a contrastive loss function exploits local patch differences in the feature space of multi-object scenes.
Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
arXiv Detail & Related papers (2022-09-20T17:36:32Z)
- GAMA: Generative Adversarial Multi-Object Scene Attacks [48.33120361498787]
This paper presents the first approach of using generative models for adversarial attacks on multi-object scenes.
We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA)
arXiv Detail & Related papers (2022-09-20T06:40:54Z)
- AF$_2$: Adaptive Focus Framework for Aerial Imagery Segmentation [86.44683367028914]
Aerial imagery segmentation has some unique challenges, the most critical of which is foreground-background imbalance.
We propose the Adaptive Focus Framework (AF$_2$), which adopts a hierarchical segmentation procedure and focuses on adaptively utilizing multi-scale representations.
AF$_2$ significantly improves accuracy on three widely used aerial benchmarks while remaining as fast as mainstream methods.
arXiv Detail & Related papers (2022-02-18T10:14:45Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks [7.665392786787577]
Presentation attack detection (PAD) methods often fail in generalizing to unseen attacks.
We propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN)
A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while staying far from the representation of attacks.
The proposed framework introduces a novel approach to learn a robust PAD system from bonafide and available (known) attack classes.
arXiv Detail & Related papers (2020-07-22T14:19:33Z)