Local Aggressive Adversarial Attacks on 3D Point Cloud
- URL: http://arxiv.org/abs/2105.09090v1
- Date: Wed, 19 May 2021 12:22:56 GMT
- Title: Local Aggressive Adversarial Attacks on 3D Point Cloud
- Authors: Yiming Sun, Feng Chen, Zhiyu Chen, Mingjie Wang, Ruonan Li
- Abstract summary: Deep neural networks are prone to adversarial examples which could deliberately fool the model to make mistakes.
In this paper, we propose local aggressive adversarial attacks (L3A) to solve the above issues.
Experiments on PointNet, PointNet++ and DGCNN demonstrate the state-of-the-art performance of our method.
- Score: 12.121901103987712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are prone to adversarial examples, which can
deliberately fool a model into making mistakes. Recently, a few works have
extended this task from 2D images to 3D point clouds using global point cloud
optimization. However, globally perturbing all points is not effective for
misleading the victim model. First, not all points matter in optimization
toward misleading: many points consume a considerable share of the distortion
budget while contributing little to the attack. Second, multi-label
optimization is suboptimal for adversarial attacks, since it spends extra
effort searching for a multi-label collapse of the victim model and causes the
transformed instance to be dissimilar to any particular instance. Third, the
independent adversarial and perceptibility losses, which handle
misclassification and dissimilarity separately, update every point equally
without focus. Therefore, once the perceptibility loss approaches its budget
threshold, all points become stuck on the surface of a hypersphere and the
attack is locked into a local optimum.
To solve the above issues, we propose local aggressive adversarial attacks
(L3A). Technically, we select a set of salient points to perturb: the
high-score subset of the point cloud according to the gradient. A suite of
aggressive optimization strategies is then developed to reinforce the
imperceptible generation of adversarial examples toward misleading victim
models. Extensive experiments on PointNet, PointNet++ and DGCNN demonstrate
the state-of-the-art performance of our method against existing adversarial
attack methods.
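The salient-point selection described in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' released L3A code: `salient_point_attack` is a hypothetical helper, and in a real attack the gradient would come from backpropagating the adversarial loss through the victim network (e.g. PointNet) rather than being supplied directly.

```python
import numpy as np

def salient_point_attack(points, grad, k, step=0.05):
    """Perturb only the k points with the largest gradient magnitude.

    points: (N, 3) point cloud.
    grad:   (N, 3) gradient of the attack loss w.r.t. each point. Supplied
            directly here for illustration; in practice it is obtained by
            backprop through the victim model.
    Hypothetical sketch of gradient-scored salient-point selection, not the
    authors' exact implementation.
    """
    scores = np.linalg.norm(grad, axis=1)           # per-point saliency score
    salient = np.argsort(scores)[-k:]               # indices of top-k points
    adv = points.copy()
    adv[salient] += step * np.sign(grad[salient])   # FGSM-style step, salient subset only
    return adv, salient

# Toy example: 5 points, gradient concentrated on points 1 and 3,
# so only those two points are moved.
pts = np.zeros((5, 3))
g = np.zeros((5, 3))
g[1] = [1.0, 0.0, 0.0]
g[3] = [0.0, 2.0, 0.0]
adv, idx = salient_point_attack(pts, g, k=2)
```

Restricting the update to the high-score subset is what concentrates the distortion budget on points that actually contribute to the attack, per the paper's first observation.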
Related papers
- Hard-Label Black-Box Attacks on 3D Point Clouds [66.52447238776482]
We introduce a novel 3D attack method based on a new spectrum-aware decision boundary algorithm to generate high-quality adversarial samples.
Experiments demonstrate that our attack competitively outperforms existing white/black-box attackers in terms of attack performance and adversary quality.
arXiv Detail & Related papers (2024-11-30T09:05:02Z)
- Transferable 3D Adversarial Shape Completion using Diffusion Models
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality [10.118505317224683]
Adversarial examples are beneficial for improving the robustness of 3D neural network models.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversary and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.