Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds
- URL: http://arxiv.org/abs/2403.05247v1
- Date: Fri, 8 Mar 2024 12:08:06 GMT
- Title: Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds
- Authors: Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan
He, Xiaochun Cao
- Abstract summary: Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
- Score: 62.94859179323329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attack methods based on point manipulation for 3D point cloud
classification have revealed the fragility of 3D models, yet the adversarial
examples they produce are easily perceived or defended against. The trade-off
between imperceptibility and adversarial strength leads most point attack
methods to inevitably introduce easily detectable outlier points upon a
successful attack. Another promising strategy, shape-based attack, can
effectively eliminate outliers, but existing methods often suffer significant
reductions in imperceptibility due to irrational deformations. We find that
concealing deformation perturbations in areas insensitive to human eyes can
achieve a better trade-off between imperceptibility and adversarial strength,
specifically in parts of the object surface that are complex and exhibit
drastic curvature changes. Therefore, we propose a novel shape-based
adversarial attack method, HiT-ADV, which initially conducts a two-stage search
for attack regions based on saliency and imperceptibility scores, and then adds
deformation perturbations in each attack region using Gaussian kernel
functions. Additionally, HiT-ADV is extendable to physical attacks. We propose
that by employing benign resampling and benign rigid transformations, we can
further enhance physical adversarial strength with little sacrifice to
imperceptibility. Extensive experiments have validated the superiority of our
method in terms of adversarial strength and imperceptibility in both digital
and physical spaces. Our code is available at: https://github.com/TRLou/HiT-ADV.
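To make the mechanism above concrete, here is a minimal, hypothetical Python sketch (not the authors' released code, which lives in the repository linked above). It assumes per-point saliency and imperceptibility scores are already available; the function names, the simple product scoring, and all parameter values are illustrative assumptions only. It shows the two ideas from the abstract: a Gaussian-kernel-weighted deformation is smooth and local, so it introduces no isolated outlier points, and a benign rigid transformation changes pose but not shape.

    import numpy as np

    def gaussian_kernel_deform(points, center, direction, amplitude=0.05, bandwidth=0.1):
        # Shift each point along `direction`, weighted by a Gaussian of its
        # distance to `center`. The perturbation decays smoothly to zero, so
        # no isolated, easily detectable outlier points are created.
        direction = direction / np.linalg.norm(direction)
        sq_dist = np.sum((points - center) ** 2, axis=1)
        weights = amplitude * np.exp(-sq_dist / (2.0 * bandwidth ** 2))
        return points + weights[:, None] * direction

    def select_attack_regions(points, saliency, imperceptibility, k=4):
        # Schematic two-stage region search: combine per-point saliency and
        # imperceptibility scores (here a simple product, an assumption) and
        # keep the top-k points as kernel centers.
        score = saliency * imperceptibility
        return points[np.argsort(-score)[:k]]

    def benign_rigid_transform(points, angle_z=np.pi / 12.0):
        # A rigid rotation about the z-axis: it changes pose, not shape, so it
        # costs nothing in imperceptibility while potentially improving
        # physical adversarial strength, as the abstract suggests.
        c, s = np.cos(angle_z), np.sin(angle_z)
        rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return points @ rotation.T

    # Toy demo with random placeholder scores; real scores would come from the
    # victim model's gradients and from local surface geometry.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1024, 3))
    saliency, imperceptibility = rng.random(1024), rng.random(1024)
    for center in select_attack_regions(cloud, saliency, imperceptibility):
        cloud = gaussian_kernel_deform(cloud, center, direction=np.array([0.0, 0.0, 1.0]))
    cloud = benign_rigid_transform(cloud)

In the paper, the saliency score would come from the victim classifier and the imperceptibility term from local surface complexity (regions with drastic curvature changes); the random placeholders above only mark where those scores plug in.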
Related papers
- Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality [10.118505317224683]
Adversarial examples help improve the robustness of 3D neural network models.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against adversarial samples crafted by minimally perturbing clean inputs.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Imperceptible and Robust Backdoor Attack in 3D Point Cloud [62.992167285646275]
We propose a novel imperceptible and robust backdoor attack (IRBA) on 3D point clouds.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves an 80%+ attack success rate (ASR) in most cases, even under pre-processing defenses.
arXiv Detail & Related papers (2022-08-17T03:53:10Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, light-weighted, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification [12.587561231609083]
We study 3D point cloud attacks from two new and challenging perspectives.
We develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist them.
We train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations.
arXiv Detail & Related papers (2021-11-22T05:07:36Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Local Aggressive Adversarial Attacks on 3D Point Cloud [12.121901103987712]
Deep neural networks are prone to adversarial examples, which can deliberately fool a model into making mistakes.
In this paper, we propose a local aggressive adversarial attack (L3A) method to solve the above issues.
Experiments on PointNet, PointNet++ and DGCNN demonstrate the state-of-the-art performance of our method.
arXiv Detail & Related papers (2021-05-19T12:22:56Z)
- Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness that builds on insights from the field of domain adaptation.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs.
arXiv Detail & Related papers (2020-06-08T14:20:02Z)