Influencer Backdoor Attack on Semantic Segmentation
- URL: http://arxiv.org/abs/2303.12054v5
- Date: Wed, 17 Apr 2024 15:12:29 GMT
- Title: Influencer Backdoor Attack on Semantic Segmentation
- Authors: Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao
- Abstract summary: Influencer Backdoor Attack (IBA) is a backdoor attack on semantic segmentation models.
IBA is expected to maintain the classification accuracy of non-victim pixels while misleading the classification of all victim pixels in every single inference.
We introduce an innovative Pixel Random Labeling strategy which maintains optimal performance even when the trigger is placed far from the victim pixels.
- Score: 39.57965442338681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When a small number of poisoned samples are injected into the training dataset of a deep neural network, the network can be induced to exhibit malicious behavior during inference, which poses potential threats to real-world applications. While such backdoor attacks have been intensively studied in classification, backdoor attacks on semantic segmentation have been largely overlooked. Unlike classification, semantic segmentation aims to classify every pixel within a given image. In this work, we explore backdoor attacks on segmentation models that misclassify all pixels of a victim class by injecting a specific trigger onto non-victim pixels during inference, an attack we dub the Influencer Backdoor Attack (IBA). IBA is expected to maintain the classification accuracy of non-victim pixels while misleading the classification of all victim pixels in every single inference, and it can be easily applied to real-world scenes. Based on the context aggregation ability of segmentation models, we propose a simple yet effective Nearest-Neighbor trigger injection strategy. We also introduce an innovative Pixel Random Labeling strategy which maintains optimal performance even when the trigger is placed far from the victim pixels. Our extensive experiments reveal that current segmentation models do suffer from backdoor attacks, demonstrate IBA's real-world applicability, and show that our proposed techniques can further increase attack performance.
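The sketch below illustrates how an IBA-style poisoned training pair could be assembled, assuming Nearest-Neighbor injection means pasting the trigger at the non-victim location closest to the victim region; the function, its parameters, and the Pixel Random Labeling rate are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def poison_sample(image, mask, trigger, victim_cls, target_cls,
                  num_classes, prl_frac=0.01, stride=8):
    """Build one IBA-style poisoned training pair (illustrative sketch only).

    image:   (H, W, 3) uint8 array
    mask:    (H, W) int array of per-pixel class labels
    trigger: (h, w, 3) uint8 trigger patch
    """
    h, w = trigger.shape[:2]
    H, W = mask.shape
    victim = (mask == victim_cls)
    if not victim.any():
        return image.copy(), mask.copy()       # nothing to attack in this sample
    vy, vx = np.nonzero(victim)
    cy, cx = vy.mean(), vx.mean()              # centroid of the victim region

    # Nearest-Neighbor injection: scan candidate corners on a coarse grid and
    # keep the placement on non-victim pixels that lies closest to the victims.
    best, best_d = None, np.inf
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            if victim[y:y + h, x:x + w].any():
                continue                       # trigger must avoid victim pixels
            d = (y + h / 2 - cy) ** 2 + (x + w / 2 - cx) ** 2
            if d < best_d:
                best, best_d = (y, x), d

    img, lbl = image.copy(), mask.copy()
    if best is not None:
        y, x = best
        img[y:y + h, x:x + w] = trigger        # inject the trigger patch
        lbl[victim] = target_cls               # flip only the victim-class labels

    # Pixel Random Labeling (sketch): relabel a small random pixel subset so
    # the learned association holds even when the trigger sits far away.
    flip = np.random.rand(H, W) < prl_frac
    lbl[flip] = np.random.randint(0, num_classes, flip.sum())
    return img, lbl
```

At inference time, injecting the same trigger onto any non-victim region of a clean image should then flip the victim-class pixels while leaving the rest of the prediction intact, which is the behavior the abstract describes.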
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
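One plausible reading of "disrupting image encoder features", sketched below as a PGD-style maximization of feature distortion; the encoder handle, step sizes, and objective are assumptions rather than the paper's actual attack.

```python
import torch

def feature_disruption_attack(encoder, image, steps=10, alpha=2 / 255, eps=8 / 255):
    """PGD-style sketch: push encoder features away from the clean features,
    so any prompt-based mask head downstream receives corrupted inputs."""
    clean_feat = encoder(image).detach()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        distortion = (encoder(adv) - clean_feat).pow(2).mean()
        grad, = torch.autograd.grad(distortion, adv)
        adv = adv.detach() + alpha * grad.sign()        # ascend the distortion
        adv = image + (adv - image).clamp(-eps, eps)    # stay in the eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```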
arXiv Detail & Related papers (2024-04-03T09:09:42Z)
- Uncertainty-weighted Loss Functions for Improved Adversarial Attacks on Semantic Segmentation [16.109860499330562]
Adversarial attacks developed for classification models were shown to be applicable to segmentation models as well.
We present simple uncertainty-based weighting schemes for the loss functions of such attacks.
The weighting schemes can be easily integrated into the loss function of a range of well-known adversarial attackers.
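A minimal sketch of one such weighting, assuming per-pixel predictive entropy is used as the uncertainty measure; the paper's concrete schemes may differ.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_loss(logits, labels):
    """Per-pixel cross-entropy weighted by normalized predictive entropy.
    logits: (B, C, H, W), labels: (B, H, W) -> scalar attack loss."""
    ce = F.cross_entropy(logits, labels, reduction="none")          # (B, H, W)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)    # (B, H, W)
    weight = entropy / entropy.amax(dim=(1, 2), keepdim=True).clamp_min(1e-12)
    return (weight * ce).mean()   # drop-in loss for any PGD-style attacker
```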
arXiv Detail & Related papers (2023-10-26T14:47:10Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
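A toy version of the idea, assuming the detector thresholds a scalar uncertainty statistic calibrated on clean data; the paper's actual uncertainty features and classifier may be richer.

```python
import torch

def flag_adversarial(logits, threshold):
    """Flag inputs whose mean per-pixel predictive entropy exceeds a
    threshold fit on clean images. logits: (B, C, H, W) -> (B,) bool."""
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (B, H, W)
    return entropy.mean(dim=(1, 2)) > threshold
```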
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior on the object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with a caution that object contours are a common weakness of object detectors across various architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness [63.726895965125145]
Deep neural network-based image classifiers are vulnerable to adversarial perturbations.
In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD.
Since SegPGD can create more effective adversarial examples, the adversarial training with our SegPGD can boost the robustness of segmentation models.
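A sketch of the SegPGD loss as described in the paper: per-pixel losses are split by whether each pixel is still correctly classified, with a schedule that shifts weight toward already-fooled pixels over the attack iterations (the exact schedule here is a best-effort reconstruction).

```python
import torch
import torch.nn.functional as F

def segpgd_loss(logits, labels, t, T):
    """SegPGD-style loss at PGD iteration t of T.
    Early iterations focus on pixels that still resist the attack."""
    lam = t / (2 * T)                                        # grows from 0 to 0.5
    ce = F.cross_entropy(logits, labels, reduction="none")   # (B, H, W)
    still_correct = logits.argmax(dim=1) == labels
    loss = (1 - lam) * ce[still_correct].sum() + lam * ce[~still_correct].sum()
    return loss / labels.numel()
```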
arXiv Detail & Related papers (2022-07-25T17:56:54Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve attackers' ability to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Towards Class-Oriented Poisoning Attacks Against Neural Networks [1.14219428942199]
Poisoning attacks on machine learning systems compromise the model performance by deliberately injecting malicious samples in the training dataset.
We propose a class-oriented poisoning attack that is capable of forcing the corrupted model to predict in two specific ways.
To maximize the adversarial effect as well as reduce the computational complexity of poisoned data generation, we propose a gradient-based framework.
arXiv Detail & Related papers (2020-07-31T19:27:37Z)