Imperceptible and Robust Backdoor Attack in 3D Point Cloud
- URL: http://arxiv.org/abs/2208.08052v1
- Date: Wed, 17 Aug 2022 03:53:10 GMT
- Title: Imperceptible and Robust Backdoor Attack in 3D Point Cloud
- Authors: Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shu-Tao Xia
- Abstract summary: We propose a novel imperceptible and robust backdoor attack (IRBA) for 3D point clouds.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves 80%+ ASR in most cases even with pre-processing techniques.
- Score: 62.992167285646275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the thriving of deep learning in processing point cloud data, recent
works show that backdoor attacks pose a severe security threat to 3D vision
applications. The attacker injects the backdoor into the 3D model by poisoning
a few training samples with a trigger, such that the backdoored model performs
well on clean samples but behaves maliciously when the trigger pattern appears.
Existing attacks often insert some additional points into the point cloud as
the trigger, or utilize a linear transformation (e.g., rotation) to construct
the poisoned point cloud. However, the effects of these poisoned samples are
likely to be weakened or even eliminated by some commonly used pre-processing
techniques for 3D point cloud, e.g., outlier removal or rotation augmentation.
In this paper, we propose a novel imperceptible and robust backdoor attack
(IRBA) to tackle this challenge. We utilize a nonlinear and local
transformation, called weighted local transformation (WLT), to construct
poisoned samples with unique transformations. As there are several
hyper-parameters and randomness in WLT, it is difficult to produce two similar
transformations. Consequently, poisoned samples with unique transformations are
likely to be resistant to the aforementioned pre-processing techniques. Besides, due to
the controllability and smoothness of the distortion caused by a fixed WLT, the
generated poisoned samples are also imperceptible to human inspection.
Extensive experiments on three benchmark datasets and four models show that
IRBA achieves an ASR above 80% in most cases even with pre-processing techniques,
significantly higher than that of previous state-of-the-art attacks.
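The abstract names the weighted local transformation (WLT) but does not spell out its form, so the Python sketch below is only an illustrative approximation of the idea under assumed details (Gaussian distance weights, per-anchor random Euler rotations, and the hypothetical function name `weighted_local_transform`); it is not the authors' implementation.

```python
import numpy as np

def weighted_local_transform(points, n_anchors=16, max_rot_deg=5.0, seed=None):
    """Sketch of a smooth, locally weighted nonlinear point cloud distortion.

    points: (N, 3) array of xyz coordinates, with N >= n_anchors.
    Each point is moved by a distance-weighted blend of small random
    rotations defined around randomly chosen anchor points, so the
    resulting distortion is nonlinear, local, and unique per random draw.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]

    # Randomly pick anchor points and a small random Euler rotation per anchor.
    anchors = points[rng.choice(n, size=n_anchors, replace=False)]
    angles = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, size=(n_anchors, 3)))

    def rot(ax, ay, az):
        cx, sx = np.cos(ax), np.sin(ax)
        cy, sy = np.cos(ay), np.sin(ay)
        cz, sz = np.cos(az), np.sin(az)
        rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return rz @ ry @ rx

    rots = np.stack([rot(*a) for a in angles])                 # (K, 3, 3)

    # Smooth Gaussian weights of every point with respect to every anchor.
    diff = points[:, None, :] - anchors[None, :, :]            # (N, K, 3)
    dist = np.linalg.norm(diff, axis=-1)                       # (N, K)
    w = np.exp(-dist ** 2)
    w /= w.sum(axis=1, keepdims=True)                          # rows sum to 1

    # Rotate each point about every anchor, then blend the K candidates.
    local = np.einsum('kij,nkj->nki', rots, diff) + anchors[None, :, :]
    return np.einsum('nk,nki->ni', w, local)                   # (N, 3)
```

In a poisoning pipeline, a transformation of this kind would be applied to a small fraction of training samples, which are then relabeled with the attacker's target class; because the distortion keeps the overall shape smooth and adds no extra points, outlier removal or rotation augmentation does not obviously undo it.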
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- An Invisible Backdoor Attack Based On Semantic Feature [0.0]
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years.
We propose a novel backdoor attack that makes imperceptible changes.
We evaluate our attack on three prominent image classification datasets.
arXiv Detail & Related papers (2024-05-19T13:50:40Z)
- iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself [5.007492246056274]
MirrorAttack is a novel and effective 3D backdoor attack method.
It implants the trigger by simply reconstructing a clean point cloud with an auto-encoder.
We achieve state-of-the-art ASR on different types of victim models even with the intervention of defensive techniques.
arXiv Detail & Related papers (2024-03-09T09:15:37Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- SATBA: An Invisible Backdoor Attack Based On Spatial Attention [7.405457329942725]
Backdoor attacks involve training a Deep Neural Network (DNN) on datasets that contain hidden trigger patterns.
Most existing backdoor attacks suffer from two significant drawbacks: their trigger patterns are visible and easy to detect by backdoor defenses or even human inspection.
We propose a novel backdoor attack named SATBA that overcomes these limitations using spatial attention and a U-Net based model.
arXiv Detail & Related papers (2023-02-25T10:57:41Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- 3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation [11.198650616143219]
We show that training with adversarial samples can improve the performance of neural networks on 3D point cloud completion tasks.
We propose a novel approach to generate adversarial samples that benefit both the performance of clean and adversarial samples.
Experimental results show that training with the adversarial samples crafted by our method effectively enhances the performance of PCN on the ShapeNet dataset.
arXiv Detail & Related papers (2021-09-21T13:16:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.