Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification
- URL: http://arxiv.org/abs/2111.10990v1
- Date: Mon, 22 Nov 2021 05:07:36 GMT
- Title: Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification
- Authors: Daizong Liu, Wei Hu
- Abstract summary: We study 3D point cloud attacks from two new and challenging perspectives.
We develop an adversarial transformation model to generate the most harmful distortions and train the adversarial examples to withstand them.
We train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations.
- Score: 12.587561231609083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although much effort has been devoted to attack and defense in the 2D
image domain in recent years, few methods explore the vulnerability of 3D models.
Existing 3D attackers generally perform point-wise perturbation over point
clouds, resulting in deformed structures or outliers that are easily
perceivable by humans. Moreover, their adversarial examples are generated in
the white-box setting and frequently suffer from low success rates when
transferred to attack remote black-box models. In this paper, we study 3D point
cloud attacks from two new and challenging perspectives by proposing a novel
Imperceptible Transfer Attack (ITA): 1) Imperceptibility: we constrain the
perturbation direction of each point along the normal vector of its local
neighborhood surface, so that the generated examples retain similar geometric
properties, thus enhancing imperceptibility. 2) Transferability: we develop
an adversarial transformation model to generate the most harmful distortions
and train the adversarial examples to withstand them, improving their
transferability to unknown black-box models. Further, we propose to train more
robust black-box 3D models to defend against such ITA attacks by learning more
discriminative point cloud representations. Extensive evaluations demonstrate
that our ITA attack is more imperceptible and transferable than
state-of-the-art methods and validate the superiority of our defense strategy.
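The imperceptibility constraint reduces each point's perturbation to a single scalar along the normal of its local neighborhood. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the PCA-based normal estimate, step size, and per-point budget are illustrative assumptions.

```python
import numpy as np

def estimate_normals(points, k=16):
    """Estimate per-point unit normals by PCA over the k nearest neighbors."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]          # neighbor indices (self included)
    normals = np.zeros_like(points)
    for i in range(points.shape[0]):
        nbrs = points[knn[i]] - points[knn[i]].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]                      # direction of least variance
    return normals

def normal_constrained_step(points, grad, normals, step=0.01, budget=0.05):
    """One attack iteration: keep only the gradient component along each
    point's surface normal, so the cloud deforms along the surface rather
    than scattering points into outliers."""
    mag = (grad * normals).sum(axis=-1, keepdims=True)   # signed projection
    delta = np.clip(step * mag, -budget, budget)         # bounded scalar offset
    return points + delta * normals
```

Restricting the update to the normal direction keeps points on or near the underlying surface, which is why the resulting examples preserve the original geometry.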
Related papers
- Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack [6.101494710781259]
We introduce a unified framework for conducting adversarial attacks within the context of 3D object tracking.
In addressing black-box attack scenarios, we introduce a novel transfer-based approach, the Target-aware Perturbation Generation (TAPG) algorithm.
Our experimental findings reveal a significant vulnerability in advanced tracking methods when subjected to both black-box and white-box attacks.
arXiv Detail & Related papers (2024-10-28T10:20:38Z)
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We also propose benign resampling and benign rigid transformations to further enhance physical adversarial strength with little sacrifice of imperceptibility.
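A hedged sketch of the two-stage region search this entry describes: combine a classifier-saliency score with a geometric-complexity (imperceptibility) score and keep the top-ranked regions. The scores, their scale, and the mixing weight are hypothetical; the paper's actual criteria may differ.

```python
import numpy as np

def rank_attack_regions(saliency, curvature, alpha=0.5, top_k=8):
    """Hypothetical ranking in the spirit of HiT-ADV's two-stage search:
    favor regions that are salient to the classifier yet geometrically
    complex enough to hide shape perturbations. Both inputs are assumed
    to be per-region scores in [0, 1] computed upstream."""
    score = alpha * saliency + (1.0 - alpha) * curvature
    return np.argsort(score)[::-1][:top_k]   # best candidate regions first
```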
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
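A rough sketch of the distortion-aware idea, under stated assumptions: estimate how distorted a cloud is from local planarity, then pick a diffusion depth accordingly and hand the cloud to an assumed pretrained denoiser. The planarity proxy and the calibration constant are placeholders, not the paper's estimator.

```python
import numpy as np

def estimate_distortion(points, k=8):
    """Proxy distortion score: mean offset of each point from the best-fit
    plane of its k nearest neighbors; clean surfaces score near zero."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]     # exclude the point itself
    offsets = []
    for i in range(points.shape[0]):
        nbrs = points[knn[i]]
        center = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - center, full_matrices=False)
        offsets.append(abs((points[i] - center) @ vt[-1]))
    return float(np.mean(offsets))

def adaptive_purify(points, denoiser, scale=1000.0, max_t=50):
    """Map estimated distortion to a diffusion depth, then let an (assumed)
    pretrained point-cloud diffusion model `denoiser(points, t)` rebuild the
    pristine distribution. `scale` is an arbitrary calibration constant."""
    t = int(np.clip(scale * estimate_distortion(points), 1, max_t))
    return denoiser(points, t)
```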
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- Improving transferability of 3D adversarial attacks with scale and shear transformations [34.07511992559102]
This paper proposes Scale and Shear (SS) Attack to generate 3D adversarial examples with strong transferability.
Specifically, we randomly scale or shear the input point cloud, so that the attack will not overfit the white-box model.
Experiments show that the SS attack can be seamlessly combined with the existing state-of-the-art (SOTA) 3D point cloud attack methods.
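The transformation itself is easy to sketch. Here is a minimal NumPy version of a random scale-and-shear, which an attacker would apply to the input at each iteration so the perturbation does not overfit the white-box model; the parameter ranges are illustrative, not the paper's.

```python
import numpy as np

def random_scale_shear(points, scale_range=(0.8, 1.2), shear_max=0.2, rng=None):
    """Apply a random anisotropic scale followed by a random shear to an
    (N, 3) point cloud. Attacking such transformed copies keeps the
    perturbation from overfitting one fixed model."""
    rng = np.random.default_rng() if rng is None else rng
    S = np.diag(rng.uniform(*scale_range, size=3))       # per-axis scaling
    H = np.eye(3)                                        # upper-triangular shear
    H[0, 1], H[0, 2], H[1, 2] = rng.uniform(-shear_max, shear_max, size=3)
    return points @ (S @ H).T
```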
arXiv Detail & Related papers (2022-11-02T13:09:38Z)
- Imperceptible and Robust Backdoor Attack in 3D Point Cloud [62.992167285646275]
We propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge.
We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations.
Experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases, even with pre-processing techniques applied.
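A loose illustration of what a weighted local transformation might look like: local rotations whose influence decays smoothly with distance from randomly chosen anchors. This is a guess at the flavor of WLT, not its published form; all parameters are hypothetical.

```python
import numpy as np

def weighted_local_transform(points, centers, sigma=0.1, max_angle=0.2, rng=None):
    """Illustrative stand-in for a WLT-style trigger: each anchor rotates
    nearby points about the z-axis, with influence decaying smoothly with
    distance, producing a sample-unique, hard-to-spot warp."""
    rng = np.random.default_rng() if rng is None else rng
    out = points.copy()
    for c in centers:
        theta = rng.uniform(-max_angle, max_angle)
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
        # Gaussian falloff in the anchor's neighborhood.
        w = np.exp(-((points - c) ** 2).sum(-1) / (2 * sigma ** 2))[:, None]
        out += w * ((points - c) @ R.T + c - points)
    return out
```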
arXiv Detail & Related papers (2022-08-17T03:53:10Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Unlike prior works, the resulting adversarial 3D point clouds reflect shape variations in the 3D point cloud space while staying close to the original.
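A compact sketch of attacking in a learned latent space, assuming an encoder/decoder pair and a gradient oracle for the attack objective are supplied by the surrounding framework; none of these callables come from the paper's code.

```python
def latent_space_attack(encoder, decoder, points, grad_fn, steps=10, lr=0.05):
    """Perturb the auto-encoder's latent code rather than raw coordinates,
    so the decoded cloud stays on the learned shape manifold. `grad_fn(z)`
    is assumed to return the gradient of the attack objective w.r.t. z."""
    z = encoder(points)
    for _ in range(steps):
        z = z + lr * grad_fn(z)   # ascend the adversarial objective in latent space
    return decoder(z)             # adversarial yet shape-consistent point cloud
```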
arXiv Detail & Related papers (2020-05-24T00:03:27Z)
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)