Defense-PointNet: Protecting PointNet Against Adversarial Attacks
- URL: http://arxiv.org/abs/2002.11881v1
- Date: Thu, 27 Feb 2020 02:35:08 GMT
- Title: Defense-PointNet: Protecting PointNet Against Adversarial Attacks
- Authors: Yu Zhang, Gongbo Liang, Tawfiq Salem, Nathan Jacobs
- Abstract summary: We propose Defense-PointNet to minimize the vulnerability of PointNet to adversarial attacks.
We show that Defense-PointNet significantly improves the robustness of the network against adversarial samples.
- Score: 19.814951750594574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite remarkable performance across a broad range of tasks, neural networks
have been shown to be vulnerable to adversarial attacks. Many works focus on
adversarial attacks and defenses on 2D images, but few focus on 3D point
clouds. In this paper, our goal is to enhance the adversarial robustness of
PointNet, which is one of the most widely used models for 3D point clouds. We
apply the fast gradient sign attack method (FGSM) on 3D point clouds and find
that FGSM can be used to generate not only adversarial images but also
adversarial point clouds. To minimize the vulnerability of PointNet to
adversarial attacks, we propose Defense-PointNet. We compare our model with two
baseline approaches and show that Defense-PointNet significantly improves the
robustness of the network against adversarial samples.
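The abstract notes that FGSM, originally an image attack, transfers directly to point clouds. As a minimal illustration only (not the authors' PointNet setup), the sketch below applies one FGSM step to an N×3 point cloud against a hypothetical linear logistic classifier, chosen so the input gradient has a closed form:

```python
import numpy as np

def fgsm_point_cloud(points, weights, label, eps=0.05):
    """One FGSM step on a point cloud of shape (N, 3).

    Illustrative sketch, not the paper's implementation: `weights`
    is a hypothetical flat weight vector standing in for a network,
    scoring the flattened cloud with sigmoid(w . x).
    """
    x = points.reshape(-1)            # flatten (N, 3) -> (3N,)
    z = weights @ x                   # logit
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability
    # Gradient of the binary cross-entropy loss w.r.t. the input:
    # dL/dx = (p - label) * w
    grad = (p - label) * weights
    # FGSM: move every coordinate a fixed step eps in the sign
    # direction of the gradient, increasing the loss.
    x_adv = x + eps * np.sign(grad)
    return x_adv.reshape(points.shape)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))      # toy point cloud
w = rng.normal(size=pts.size)
adv = fgsm_point_cloud(pts, w, label=1.0, eps=0.05)
```

Because the perturbation is `eps * sign(grad)`, every coordinate moves by at most `eps`, which is what makes FGSM perturbations bounded and hard to see on a dense cloud.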
Related papers
- Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks
for Defending Adversarial Examples [25.029854308139853]
Adversarial examples on 3D point clouds are more challenging to defend against than those on 2D images.
In this paper, we first establish a comprehensive and rigorous point cloud adversarial robustness benchmark.
We then perform extensive and systematic experiments to identify an effective combination of these tricks.
We construct a more robust defense framework achieving an average accuracy of 83.45% against various attacks.
arXiv Detail & Related papers (2023-07-31T01:34:24Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition [29.840946461846]
3D point clouds are a critical data representation in many real-world applications such as autonomous driving, robotics, and medical imaging.
Deep learning, however, is notoriously vulnerable to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
arXiv Detail & Related papers (2022-08-21T04:49:17Z)
- Passive Defense Against 3D Adversarial Point Clouds Through the Lens of
3D Steganalysis [1.14219428942199]
A 3D adversarial point cloud detector is designed through the lens of 3D steganalysis.
To our knowledge, this work is the first to apply 3D steganalysis to 3D adversarial example defense.
arXiv Detail & Related papers (2022-05-18T06:19:15Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial effectiveness and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of
Point Cloud-based Deep Networks [123.5839352227726]
This paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.
To the best of our knowledge, this is the first generation-based 3D point cloud attack method.
arXiv Detail & Related papers (2020-11-01T17:17:10Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
- ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect shape variations in the 3D point cloud space while remaining close to the originals.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.