Generating Unrestricted 3D Adversarial Point Clouds
- URL: http://arxiv.org/abs/2111.08973v2
- Date: Fri, 19 Nov 2021 04:24:42 GMT
- Title: Generating Unrestricted 3D Adversarial Point Clouds
- Authors: Xuelong Dai, Yanjie Li, Hua Dai, Bin Xiao
- Abstract summary: Deep learning for 3D point clouds remains vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
- Score: 9.685291478330054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Utilizing 3D point cloud data has become an urgent need for the deployment of
artificial intelligence in many areas like facial recognition and self-driving.
However, deep learning for 3D point clouds is still vulnerable to adversarial
attacks, e.g., iterative attacks, point transformation attacks, and generative
attacks. These attacks must restrict the perturbations of adversarial examples
to a strict bound, which leads to unrealistic adversarial 3D point clouds.
In this paper, we propose an Adversarial Graph-Convolutional Generative
Adversarial Network (AdvGCGAN) to generate visually realistic adversarial 3D
point clouds from scratch. Specifically, we use a graph convolutional generator
and a discriminator with an auxiliary classifier to generate realistic point
clouds, which learn the latent distribution from real 3D data. An
unrestricted adversarial attack loss is incorporated into the adversarial
training of the GAN, which enables the generator to produce adversarial
examples that spoof the target network. Compared with existing
state-of-the-art attack methods, the experimental results demonstrate the
effectiveness of our unrestricted adversarial attack method, with a higher
attack success rate and visual quality. Additionally, the proposed AdvGCGAN
achieves better performance against defense models and better
transferability than existing attack methods, with strong camouflage.
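The abstract describes training the generator with a standard GAN objective plus an unrestricted adversarial attack loss that drives the target classifier away from the true class. A minimal numerical sketch of such a combined generator objective follows; the function names, the non-saturating GAN term, and the weighting hyperparameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generator_loss(disc_score_fake, target_logits, true_class, lam=1.0):
    """Combined generator loss for one batch of generated point clouds.

    disc_score_fake: discriminator's probability that each fake point
                     cloud is real, shape (B,)
    target_logits:   target classifier's logits on the fakes, shape (B, C)
    true_class:      class label each sample is conditioned on, shape (B,)
    lam:             weight of the attack term (assumed hyperparameter)
    """
    # Non-saturating GAN term: make the fakes look real to the discriminator.
    gan_term = -np.log(disc_score_fake + 1e-12).mean()
    # Attack term: minimizing mean log p(true class) pushes the target
    # network's confidence in the true class toward zero, i.e. misclassification.
    probs = softmax(target_logits)
    p_true = probs[np.arange(len(true_class)), true_class]
    attack_term = np.log(p_true + 1e-12).mean()
    return gan_term + lam * attack_term
```

Under this sketch, a batch that the target network still classifies correctly yields a higher generator loss than one that fools it, so gradient descent on the generator jointly pursues realism and misclassification.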
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D
Point Cloud Attack [64.83391236611409]
We propose a novel 3D attack method to generate adversarial samples solely with the knowledge of class labels.
Even in the challenging hard-label setting, 3DHacker still outperforms existing 3D attacks in both attack performance and adversary quality.
arXiv Detail & Related papers (2023-08-15T03:29:31Z) - Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented
Reality [10.118505317224683]
Adversarial examples help improve the robustness of 3D neural network models.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition [29.840946461846]
3D Point cloud is a critical data representation in many real-world applications like autonomous driving, robotics, and medical imaging.
Deep learning is notorious for its vulnerability to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
arXiv Detail & Related papers (2022-08-21T04:49:17Z) - Passive Defense Against 3D Adversarial Point Clouds Through the Lens of
3D Steganalysis [1.14219428942199]
A 3D adversarial point cloud detector is designed through the lens of 3D steganalysis.
To our knowledge, this work is the first to apply 3D steganalysis to 3D adversarial example defense.
arXiv Detail & Related papers (2022-05-18T06:19:15Z) - Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial strength and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z) - Imperceptible Transfer Attack and Defense on 3D Point Cloud
Classification [12.587561231609083]
We study 3D point cloud attacks from two new and challenging perspectives.
We develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist it.
We train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations.
arXiv Detail & Related papers (2021-11-22T05:07:36Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.