MirrorAttack: Backdoor Attack on 3D Point Cloud with a Distorting Mirror
- URL: http://arxiv.org/abs/2403.05847v1
- Date: Sat, 9 Mar 2024 09:15:37 GMT
- Title: MirrorAttack: Backdoor Attack on 3D Point Cloud with a Distorting Mirror
- Authors: Yuhao Bian, Shengjing Tian, Xiuping Liu
- Abstract summary: MirrorAttack is a novel and effective 3D backdoor attack method.
It implants the trigger by simply reconstructing a clean point cloud with an auto-encoder.
We achieve state-of-the-art attack success rates (ASR) on different types of victim models even under the intervention of defensive techniques.
- Score: 5.627919459380763
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The widespread deployment of Deep Neural Networks (DNNs) for 3D point cloud
processing starkly contrasts with their susceptibility to security breaches,
notably backdoor attacks. These attacks hijack DNNs during training, embedding
triggers in the data that, once activated, cause the network to make
predetermined errors while maintaining normal performance on unaltered data.
This vulnerability poses significant risks, especially given the insufficient
research on robust defense mechanisms for 3D point cloud networks against such
sophisticated threats. Existing attacks either struggle to resist basic point
cloud pre-processing methods or rely on delicate manual design. Exploring
simple, effective, imperceptible, and difficult-to-defend triggers in 3D point
clouds remains challenging. To address these challenges, we introduce
MirrorAttack, a novel and effective 3D backdoor attack method, which implants the
trigger by simply reconstructing a clean point cloud with an auto-encoder. The
data-driven nature of MirrorAttack obviates the need for complex manual
design. Minimizing the reconstruction loss automatically improves
imperceptibility. Simultaneously, the reconstruction network endows the trigger
with pronounced nonlinearity and sample specificity, rendering traditional
preprocessing techniques ineffective in eliminating it. A trigger smoothing
module based on spherical harmonic transformation is also attached to regulate
the intensity of the attack. Both quantitative and qualitative results verify
the effectiveness of our method. We achieve state-of-the-art ASR on different
types of victim models even under the intervention of defensive techniques. Moreover, the
minimal perturbation introduced by our trigger, as assessed by various metrics,
attests to the method's stealth, ensuring its imperceptibility.
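The paper itself does not publish implementation details here, but the trigger-smoothing idea it describes (regulating attack intensity via a spherical harmonic transformation of the point cloud) can be sketched in an illustrative way: fit the radial function r(theta, phi) of a centered cloud with low-degree spherical harmonics, then blend the smoothed radii back in with a hypothetical intensity parameter `alpha`. The function names and the least-squares fitting choice below are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.special import sph_harm


def to_spherical(points):
    # Convert centered (N, 3) points to spherical coordinates (r, theta, phi).
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # polar angle
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)                        # azimuth
    return r, theta, phi


def sh_smooth(points, l_max=4, alpha=1.0):
    """Low-pass filter the radial function r(theta, phi) with spherical
    harmonics up to degree l_max. `alpha` in [0, 1] blends the smoothed
    radii back into the original cloud, standing in for the paper's
    attack-intensity control (an assumption, not the published design)."""
    center = points.mean(axis=0)
    p = points - center
    r, theta, phi = to_spherical(p)
    # Real-valued spherical-harmonic design matrix, one column per (l, m).
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuthal, polar)
            y_lm = sph_harm(m, l, phi, theta)
            cols.append(y_lm.real if m >= 0 else y_lm.imag)
    basis = np.stack(cols, axis=1)                      # shape (N, n_coeffs)
    coeffs, *_ = np.linalg.lstsq(basis, r, rcond=None)  # least-squares fit
    r_smooth = basis @ coeffs
    r_out = (1 - alpha) * r + alpha * r_smooth
    # Rebuild Cartesian coordinates along the original directions.
    dirs = p / np.maximum(r, 1e-12)[:, None]
    return center + dirs * r_out[:, None]
```

With `alpha=0` the cloud passes through unchanged; larger `alpha` and smaller `l_max` produce stronger smoothing, which is the kind of knob the abstract attributes to its smoothing module.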
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - Toward Availability Attacks in 3D Point Clouds [28.496421433836908]
We show that extending 2D availability attacks directly to 3D point clouds under distance regularization is susceptible to degeneracy.
We propose a novel Feature Collision Error-Minimization (FC-EM) method, which creates additional shortcuts in the feature space.
Experiments on typical point cloud datasets, 3D intracranial aneurysm medical dataset, and 3D face dataset verify the superiority and practicality of our approach.
arXiv Detail & Related papers (2024-06-26T08:13:30Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware
Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z) - Risk-optimized Outlier Removal for Robust 3D Point Cloud Classification [54.286437930350445]
This paper highlights the challenges of point cloud classification posed by various forms of noise.
We introduce an innovative point outlier cleansing method that harnesses the power of downstream classification models.
Our proposed technique not only robustly filters diverse point cloud outliers but also consistently and significantly enhances existing robust methods for point cloud classification.
arXiv Detail & Related papers (2023-07-20T13:47:30Z) - Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented
Reality [10.118505317224683]
Adversarial examples are beneficial to improve the robustness of the 3D neural network model.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - Imperceptible and Robust Backdoor Attack in 3D Point Cloud [62.992167285646275]
We propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves 80%+ ASR in most cases even with pre-processing techniques.
arXiv Detail & Related papers (2022-08-17T03:53:10Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.