Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning
Classifiers
- URL: http://arxiv.org/abs/2104.08690v1
- Date: Sun, 18 Apr 2021 03:19:15 GMT
- Title: Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning
Classifiers
- Authors: Yue Gao, Kassem Fawaz
- Abstract summary: We propose Scale-Adv, a novel attack framework that jointly targets the image-scaling and classification stages.
For scaling attacks, we show that Scale-Adv can evade four out of five state-of-the-art defenses by incorporating adversarial examples.
For classification, we show that Scale-Adv can significantly improve the performance of machine learning attacks by leveraging weaknesses in the scaling algorithm.
- Score: 22.72696699380479
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As real-world images come in varying sizes, the machine learning model is
part of a larger system that includes an upstream image scaling algorithm. In
this system, the model and the scaling algorithm have become attractive targets
for numerous attacks, such as adversarial examples and the recent image-scaling
attack. In response to these attacks, researchers have developed defense
approaches that are tailored to attacks at each processing stage. As these
defenses are developed in isolation, their underlying assumptions become
questionable when viewing them from the perspective of an end-to-end machine
learning system. In this paper, we investigate whether defenses against scaling
attacks and adversarial examples are still robust when an adversary targets the
entire machine learning system. In particular, we propose Scale-Adv, a novel
attack framework that jointly targets the image-scaling and classification
stages. This framework packs several novel techniques, including novel
representations of the scaling defenses. It also defines two integrations that
allow for attacking the machine learning system pipeline in the white-box and
black-box settings. Based on this framework, we evaluate cutting-edge defenses
at each processing stage. For scaling attacks, we show that Scale-Adv can evade
four out of five state-of-the-art defenses by incorporating adversarial
examples. For classification, we show that Scale-Adv can significantly improve
the performance of machine learning attacks by leveraging weaknesses in the
scaling algorithm. We empirically observe that Scale-Adv can produce
adversarial examples with less perturbation and higher confidence than vanilla
black-box and white-box attacks. We further demonstrate the transferability of
Scale-Adv on a commercial online API.
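To make the joint attack concrete, the following is a minimal, illustrative sketch (not the authors' released code) of the white-box integration: the scaling stage is treated as a differentiable layer and a PGD-style perturbation is optimized in the high-resolution source image so that it still fools the classifier after downscaling. The model interface, epsilon, step size, and the bilinear scaling choice are assumptions for the example.
```python
# Illustrative sketch of a joint scaling + classification attack (PGD through a
# differentiable scaling layer). NOT the authors' implementation; hyperparameters
# and the bilinear resize are assumptions.
import torch
import torch.nn.functional as F

def scale_adv_sketch(model, x_hr, y_true, out_size=224, epsilon=8/255,
                     alpha=2/255, steps=50):
    """Craft a perturbation in the high-resolution image x_hr that still fools
    `model` after the image is downscaled to out_size x out_size."""
    x_adv = x_hr.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # The upstream scaling stage, modeled as a differentiable resize.
        x_lr = F.interpolate(x_adv, size=(out_size, out_size),
                             mode='bilinear', align_corners=False)
        loss = F.cross_entropy(model(x_lr), y_true)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # untargeted step
            x_adv = torch.min(torch.max(x_adv, x_hr - epsilon),  # L_inf ball
                              x_hr + epsilon)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```
In the black-box integration described in the abstract, the classifier gradient would instead be estimated from queries, while the scaling stage can still be differentiated exactly.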
Related papers
- On the Detection of Image-Scaling Attacks in Machine Learning [11.103249083138213]
Image scaling is an integral part of machine learning and computer vision systems.
Image-scaling attacks modifying the entire scaled image can be reliably detected even under an adaptive adversary.
We show that our methods provide strong detection performance even if only minor parts of the image are manipulated.
arXiv Detail & Related papers (2023-10-23T16:46:28Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
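As a hedged illustration of the model-parsing idea summarized above (not the paper's architecture), a small supervised network can take the adversarial perturbation as input and predict several victim-model attributes with separate heads; the attribute choices and layer sizes below are assumptions.
```python
# Illustrative model parsing network (MPN) sketch: predicts victim-model
# attributes from an adversarial perturbation. Heads and dimensions are
# assumptions for the example, not the paper's exact design.
import torch.nn as nn

class ModelParsingNet(nn.Module):
    def __init__(self, n_arch=5, n_activation=3, n_kernel=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One classification head per victim-model attribute.
        self.arch_head = nn.Linear(64, n_arch)        # e.g. architecture family
        self.act_head = nn.Linear(64, n_activation)   # e.g. activation function
        self.kernel_head = nn.Linear(64, n_kernel)    # e.g. kernel size

    def forward(self, perturbation):
        z = self.backbone(perturbation)
        return self.arch_head(z), self.act_head(z), self.kernel_head(z)
```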
- Btech thesis report on adversarial attack detection and purification of adverserially attacked images [0.0]
This thesis report is on the detection and purification of adversarially attacked images.
A deep learning model is trained on certain training examples for various tasks such as classification, regression etc.
arXiv Detail & Related papers (2022-05-09T09:24:11Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibit good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Meta Gradient Adversarial Attack [64.5070788261061]
This paper proposes a novel architecture called Meta Gradient Adversarial Attack (MGAA), which is plug-and-play and can be integrated with any existing gradient-based attack method.
Specifically, we randomly sample multiple models from a model zoo to compose different tasks and iteratively simulate a white-box attack and a black-box attack in each task.
By narrowing the gap between the gradient directions in white-box and black-box attacks, the transferability of adversarial examples on the black-box setting can be improved.
arXiv Detail & Related papers (2021-08-09T17:44:19Z)
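A rough sketch, under assumed hyperparameters and surrogate models, of the meta-gradient loop summarized above: each iteration samples a task from a model zoo, takes a white-box step on the support models, then corrects with the held-out model's gradient to simulate the black-box case. This is illustrative, not the MGAA reference implementation.
```python
# Illustrative meta-gradient adversarial attack loop (in the spirit of MGAA).
# The model zoo, task sampling, and step sizes are assumptions for the sketch.
import random
import torch
import torch.nn.functional as F

def meta_gradient_attack(model_zoo, x, y, epsilon=8/255, alpha=2/255,
                         meta_alpha=2/255, iters=40, n_support=3):
    x_adv = x.clone().detach()
    for _ in range(iters):
        # Sample a task: a few "white-box" support models plus one held-out model.
        task = random.sample(model_zoo, n_support + 1)
        support, query = task[:n_support], task[-1]

        # Inner (white-box) step on the support ensemble.
        x_tmp = x_adv.clone().detach().requires_grad_(True)
        loss = sum(F.cross_entropy(m(x_tmp), y) for m in support)
        g_support, = torch.autograd.grad(loss, x_tmp)
        x_tmp = (x_tmp + alpha * g_support.sign()).detach()

        # Outer (simulated black-box) step: follow the held-out model's gradient
        # at the inner-updated point, pulling white-box and black-box gradient
        # directions closer together.
        x_tmp.requires_grad_(True)
        g_query, = torch.autograd.grad(F.cross_entropy(query(x_tmp), y), x_tmp)
        with torch.no_grad():
            x_adv = x_adv + meta_alpha * g_query.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv
```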
- Black-box adversarial attacks using Evolution Strategies [3.093890460224435]
We study the generation of black-box adversarial attacks for image classification tasks.
Our results show that the attacked neural networks can be, in most cases, easily fooled by all the algorithms under comparison.
Some black-box optimization algorithms may be better in "harder" setups, both in terms of attack success rate and efficiency.
arXiv Detail & Related papers (2021-04-30T15:33:07Z)
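For the query-based setting studied in the entry above, a minimal (1+1)-style evolution-strategy attack looks roughly as follows; the loss interface, mutation scale, and query budget are illustrative assumptions rather than any specific algorithm from that paper.
```python
# Illustrative (1+1)-style evolution-strategy black-box attack: no gradients,
# only the model's output scores are queried. Hyperparameters are assumptions.
import numpy as np

def es_blackbox_attack(loss_fn, x, epsilon=0.05, sigma=0.01, budget=1000):
    """loss_fn(image) -> scalar loss of the true class; higher is better for
    the attacker. x is a float image array with values in [0, 1]."""
    best = x.copy()
    best_loss = loss_fn(best)
    for _ in range(budget):
        # Mutate the current best candidate with Gaussian noise.
        candidate = best + np.random.normal(0.0, sigma, size=x.shape)
        candidate = np.clip(candidate, x - epsilon, x + epsilon)
        candidate = np.clip(candidate, 0.0, 1.0)
        cand_loss = loss_fn(candidate)
        if cand_loss > best_loss:          # keep only improving offspring
            best, best_loss = candidate, cand_loss
    return best
```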
- Automating Defense Against Adversarial Attacks: Discovery of Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed Models [0.0]
We evaluate the use of multi-spectral image arrays and ensemble learners to combat adversarial attacks.
In rough analogy to defending cyber-networks, we combine techniques from both offensive ("red team") and defensive ("blue team") approaches.
arXiv Detail & Related papers (2021-03-29T19:07:55Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Towards Class-Oriented Poisoning Attacks Against Neural Networks [1.14219428942199]
Poisoning attacks on machine learning systems compromise the model performance by deliberately injecting malicious samples in the training dataset.
We propose a class-oriented poisoning attack that is capable of forcing the corrupted model to predict in two specific ways.
To maximize the adversarial effect as well as reduce the computational complexity of poisoned data generation, we propose a gradient-based framework.
arXiv Detail & Related papers (2020-07-31T19:27:37Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)