On the Detection of Image-Scaling Attacks in Machine Learning
- URL: http://arxiv.org/abs/2310.15085v1
- Date: Mon, 23 Oct 2023 16:46:28 GMT
- Title: On the Detection of Image-Scaling Attacks in Machine Learning
- Authors: Erwin Quiring, Andreas Müller, and Konrad Rieck
- Abstract summary: Image scaling is an integral part of machine learning and computer vision systems.
Image-scaling attacks modifying the entire scaled image can be reliably detected even under an adaptive adversary.
We show that our methods provide strong detection performance even if only minor parts of the image are manipulated.
- Score: 11.103249083138213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image scaling is an integral part of machine learning and computer vision
systems. Unfortunately, this preprocessing step is vulnerable to so-called
image-scaling attacks where an attacker makes unnoticeable changes to an image
so that it becomes a new image after scaling. This opens up new ways for
attackers to control the prediction or to improve poisoning and backdoor
attacks. While effective techniques exist to prevent scaling attacks, their
detection has not been rigorously studied yet. Consequently, it is currently
not possible to reliably spot these attacks in practice.
This paper presents the first in-depth systematization and analysis of
detection methods for image-scaling attacks. We identify two general detection
paradigms and derive novel methods from them that are simple in design yet
significantly outperform previous work. We demonstrate the efficacy of these
methods in a comprehensive evaluation with all major learning platforms and
scaling algorithms. First, we show that image-scaling attacks modifying the
entire scaled image can be reliably detected even under an adaptive adversary.
Second, we find that our methods provide strong detection performance even if
only minor parts of the image are manipulated. As a result, we can introduce a
novel protection layer against image-scaling attacks.
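As a rough illustration of how such detection can work in practice (not necessarily the paper's own methods), the sketch below downscales a suspect image, upscales it back, and flags large mismatches; Pillow/NumPy, the helper names, and the threshold are assumptions made for the example.

```python
# A minimal down-and-up-scaling check: benign images survive the round trip
# with moderate error, while a scaling attack swaps in different content
# during downscaling, so the reconstruction no longer resembles the input.
# The threshold below is hypothetical and would be calibrated on benign data.
import numpy as np
from PIL import Image

def rescale_mismatch(img: Image.Image, target_size=(224, 224),
                     resample=Image.NEAREST) -> float:
    """Downscale the image, upscale it back, and return the mean squared error."""
    rgb = img.convert("RGB")
    original = np.asarray(rgb, dtype=np.float64)
    small = rgb.resize(target_size, resample=resample)
    restored = np.asarray(small.resize(rgb.size, resample=Image.BILINEAR),
                          dtype=np.float64)
    return float(np.mean((original - restored) ** 2))

def looks_like_scaling_attack(img: Image.Image, threshold: float = 500.0) -> bool:
    """Flag the image when the round-trip error exceeds the calibrated threshold."""
    return rescale_mismatch(img) > threshold
```

In a real deployment, the downscaling algorithm and target size would be matched to the preprocessing pipeline under protection.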
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after only a single gradient-ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Backdoor Attacks on Self-Supervised Learning [22.24046752858929]
We show that self-supervised learning methods are vulnerable to backdoor attacks.
An attacker poisons a part of the unlabeled data by adding a small trigger (known to the attacker) to the images.
We propose a knowledge distillation based defense algorithm that succeeds in neutralizing the attack.
arXiv Detail & Related papers (2021-05-21T04:22:05Z)
- Scale-Adv: A Joint Attack on Image-Scaling and Machine Learning Classifiers [22.72696699380479]
We propose Scale-Adv, a novel attack framework that jointly targets the image-scaling and classification stages.
For scaling attacks, we show that Scale-Adv can evade four out of five state-of-the-art defenses by incorporating adversarial examples.
For classification, we show that Scale-Adv can significantly improve the performance of machine learning attacks by leveraging weaknesses in the scaling algorithm.
arXiv Detail & Related papers (2021-04-18T03:19:15Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that perturbations and prediction confidence are closely related, which guides us to detect attacks with few perturbations based on prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks [35.30705616146299]
Image-scaling functions can be adversarially abused to mount so-called image-scaling attacks.
This work presents an image-scaling attack detection framework termed Decamouflage.
Decamouflage consists of three independent detection methods: (1) rescaling, (2) filtering/pooling, and (3) steganalysis; a rough sketch of the filtering idea appears after this list.
arXiv Detail & Related papers (2020-10-08T02:30:55Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Backdooring and Poisoning Neural Networks with Image-Scaling Attacks [15.807243762876901]
We propose a novel strategy for hiding backdoor and poisoning attacks.
Our approach builds on a recent class of attacks against image scaling.
We show that backdoors and poisoning work equally well when combined with image-scaling attacks.
arXiv Detail & Related papers (2020-03-19T08:59:50Z)
- Detecting Patch Adversarial Attacks with Image Residuals [9.169947558498535]
A discriminator is trained to distinguish between clean and adversarial samples.
We show that the obtained residuals act as a digital fingerprint for adversarial attacks.
Results show that the proposed detection method generalizes to previously unseen, stronger attacks.
arXiv Detail & Related papers (2020-02-28T01:28:22Z)
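As referenced in the Decamouflage entry above, the filtering/pooling idea can be sketched as follows; this is a minimal illustration under assumed choices (median filter, nearest-neighbour downscaling, MSE score), not Decamouflage's actual implementation.

```python
# Compare the downscaled image with and without a small median filter applied.
# A scaling attack hides the target in a sparse set of pixels that the
# downscaler samples; the median filter wipes those pixels out, so the two
# downscaled results diverge sharply for attack images but stay close for
# benign ones. Filter size and score are illustrative assumptions.
import numpy as np
from PIL import Image, ImageFilter

def filtering_mismatch(img: Image.Image, target_size=(224, 224),
                       kernel_size: int = 3,
                       resample=Image.NEAREST) -> float:
    """Return the MSE between the downscaled original and the downscaled, median-filtered image."""
    rgb = img.convert("RGB")
    filtered = rgb.filter(ImageFilter.MedianFilter(size=kernel_size))
    plain = np.asarray(rgb.resize(target_size, resample=resample), dtype=np.float64)
    cleaned = np.asarray(filtered.resize(target_size, resample=resample), dtype=np.float64)
    return float(np.mean((plain - cleaned) ** 2))
```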
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.