Improving transferability of 3D adversarial attacks with scale and shear transformations
- URL: http://arxiv.org/abs/2211.01093v1
- Date: Wed, 2 Nov 2022 13:09:38 GMT
- Title: Improving transferability of 3D adversarial attacks with scale and shear transformations
- Authors: Jinlai Zhang, Yinpeng Dong, Jun Zhu, Jihong Zhu, Minchi Kuang, Xiaming Yuan
- Abstract summary: This paper proposes Scale and Shear (SS) Attack to generate 3D adversarial examples with strong transferability.
Specifically, we randomly scale or shear the input point cloud, so that the attack will not overfit the white-box model.
Experiments show that the SS attack can be seamlessly combined with the existing state-of-the-art (SOTA) 3D point cloud attack methods.
- Score: 34.07511992559102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous work has shown that 3D point cloud classifiers can be vulnerable to
adversarial examples. However, most existing methods are aimed at white-box attacks,
where the parameters and other information of the classifier are known to the attacker,
which is unrealistic for real-world applications. To improve attack performance against
black-box classifiers, the research community generally uses transfer-based black-box
attacks. However, the transferability of current 3D attacks is still relatively low. To
this end, this paper proposes the Scale and Shear (SS) attack to generate 3D adversarial
examples with strong transferability. Specifically, we randomly scale or shear the input
point cloud so that the attack does not overfit the white-box model, thereby improving
its transferability. Extensive experiments show that the proposed SS attack can be
seamlessly combined with existing state-of-the-art (SOTA) 3D point cloud attack methods
to form more powerful attacks, and that it improves transferability by more than 3.6
times compared to the baseline. Moreover, while substantially outperforming the baseline
methods, the SS attack achieves SOTA transferability under various defenses. Our code
will be available online at https://github.com/cuge1995/SS-attack
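The transformation at the heart of the attack, randomly scaling or shearing the point cloud before each gradient computation so that the attack does not overfit the white-box model, can be sketched in a few lines. The following is a minimal NumPy illustration; the function name `random_scale_or_shear`, the per-axis scaling, and the parameter ranges are assumptions for illustration, since the abstract does not specify the exact settings.

```python
import numpy as np

def random_scale_or_shear(points, scale_range=(0.8, 1.2), shear_max=0.2, rng=None):
    """Randomly scale or shear an (N, 3) point cloud.

    Parameter ranges are illustrative placeholders, not the paper's values.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:
        # Random scaling: an independent factor per axis.
        T = np.diag(rng.uniform(*scale_range, size=3))
    else:
        # Random shear: one off-diagonal term couples a pair of axes.
        T = np.eye(3)
        i, j = rng.choice(3, size=2, replace=False)
        T[i, j] = rng.uniform(-shear_max, shear_max)
    return points @ T.T
```

In a gradient-based attack loop, each iteration would feed `random_scale_or_shear(x_adv)` to the white-box model when computing gradients while accumulating the perturbation on the untransformed points, in the spirit of input-diversity transformations used in 2D transfer attacks.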
Related papers
- Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS.
The adversary can poison the input images to drastically increase the computation memory and time needed for 3DGS training.
Such a computation cost attack is achieved by addressing a bi-level optimization problem.
arXiv Detail & Related papers (2024-10-10T17:57:29Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - Query-Based Adversarial Prompt Generation [67.238873588125]
We build adversarial examples that cause an aligned language model to emit harmful strings.
We validate our attack on GPT-3.5 and OpenAI's safety classifier.
arXiv Detail & Related papers (2024-02-19T18:01:36Z) - Investigating Top-$k$ White-Box and Transferable Black-box Attack [75.13902066331356]
We show that stronger attacks actually transfer better in terms of the general top-$k$ attack success rate (ASR), as indicated by the interest class rank (ICR) after the attack.
We propose a new normalized CE loss that guides the logit to be updated in the direction of implicitly maximizing its rank distance from the ground-truth class.
arXiv Detail & Related papers (2022-03-30T15:02:27Z) - Art-Attack: Black-Box Adversarial Attack via Evolutionary Art [5.760976250387322]
Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerability to adversarial examples.
This paper proposes a gradient-free attack that uses the concept of evolutionary art to generate adversarial examples.
arXiv Detail & Related papers (2022-03-07T12:54:09Z) - Boosting 3D Adversarial Attacks with Attacking On Frequency [6.577812580043734]
We propose a novel point cloud attack (dubbed AOF) that pays more attention to the low-frequency components of point clouds.
Experiments validate that AOF can improve the transferability significantly compared to state-of-the-art (SOTA) attacks.
arXiv Detail & Related papers (2022-01-26T13:52:17Z) - Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification [12.587561231609083]
We study 3D point cloud attacks from two new and challenging perspectives.
We develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist it.
We train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations.
arXiv Detail & Related papers (2021-11-22T05:07:36Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - Meta Gradient Adversarial Attack [64.5070788261061]
This paper proposes a novel architecture called Meta Gradient Adversarial Attack (MGAA), which is plug-and-play and can be integrated with any existing gradient-based attack method; a minimal sketch of the idea follows this list.
Specifically, we randomly sample multiple models from a model zoo to compose different tasks and iteratively simulate a white-box attack and a black-box attack in each task.
By narrowing the gap between the gradient directions in white-box and black-box attacks, the transferability of adversarial examples on the black-box setting can be improved.
arXiv Detail & Related papers (2021-08-09T17:44:19Z)
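The MGAA entry above describes its mechanism in enough detail to sketch: sample surrogate models from a zoo into tasks, simulate a white-box step followed by a black-box step in each task, and aggregate the resulting updates. The PyTorch sketch below is a hedged reconstruction of that idea only; the function `mgaa_step`, the hyperparameters, and the averaging rule are assumptions inferred from the summary, not the paper's exact algorithm.

```python
import random
import torch
import torch.nn.functional as F

def mgaa_step(x_adv, y, model_zoo, eps_step=0.01, n_tasks=5, n_white=4):
    """One attack iteration in the spirit of MGAA (illustrative, not exact).

    Per task: a white-box gradient step on sampled surrogate models, then a
    black-box correction from a held-out model; task updates are averaged.
    """
    delta = torch.zeros_like(x_adv)
    for _ in range(n_tasks):
        # Sample a task: several white-box surrogates plus one held-out model.
        models = random.sample(model_zoo, n_white + 1)
        white, black = models[:n_white], models[-1]
        x_task = x_adv.clone().detach().requires_grad_(True)
        # Simulated white-box attack: ascend the averaged surrogate loss.
        loss_w = sum(F.cross_entropy(m(x_task), y) for m in white)
        g_white = torch.autograd.grad(loss_w, x_task)[0]
        x_mid = (x_task + eps_step * g_white.sign()).detach().requires_grad_(True)
        # Simulated black-box attack on the held-out model narrows the gap
        # between white-box and black-box gradient directions.
        loss_b = F.cross_entropy(black(x_mid), y)
        g_black = torch.autograd.grad(loss_b, x_mid)[0]
        delta += (x_mid + eps_step * g_black.sign() - x_adv).detach()
    return (x_adv + delta / n_tasks).detach()
```

In practice the zoo would hold several pretrained classifiers, and this step would be repeated with the usual projection onto the allowed perturbation budget.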
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.