PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition
- URL: http://arxiv.org/abs/2208.09801v1
- Date: Sun, 21 Aug 2022 04:49:17 GMT
- Title: PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition
- Authors: Jiachen Sun, Weili Nie, Zhiding Yu, Z. Morley Mao, and Chaowei Xiao
- Abstract summary: 3D point clouds are a critical data representation in many real-world applications like autonomous driving, robotics, and medical imaging.
Deep learning is notorious for its vulnerability to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
- Score: 29.840946461846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D point clouds are becoming a critical data representation in many real-world
applications like autonomous driving, robotics, and medical imaging. Although
the success of deep learning further accelerates the adoption of 3D point
clouds in the physical world, deep learning is notorious for its vulnerability
to adversarial attacks. In this work, we first identify that the
state-of-the-art empirical defense, adversarial training, has a major
limitation when applied to 3D point cloud models due to gradient obfuscation. We
further propose PointDP, a purification strategy that leverages diffusion
models to defend against 3D adversarial attacks. We extensively evaluate
PointDP on six representative 3D point cloud architectures, and leverage 10+
strong and adaptive attacks to demonstrate its lower-bound robustness. Our
evaluation shows that PointDP achieves significantly better robustness than
state-of-the-art purification methods under strong attacks. Results of
certified defenses based on randomized smoothing combined with PointDP will be
included in the near future.
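The abstract describes the purification pipeline only at a high level, so the sketch below illustrates one plausible reading of diffusion-driven purification: the (possibly adversarial) point cloud is partially diffused with Gaussian noise up to an intermediate step and then denoised back with a pretrained point-cloud diffusion model before classification. This is a minimal, hypothetical sketch, not the authors' implementation; the names purify, denoiser, t_star, and robust_predict, as well as the schedule values, are illustrative assumptions.

```python
import torch

# Minimal DDPM-style purification sketch (illustrative; not the PointDP code).
# Assumes `denoiser(x_t, t)` is a pretrained point-cloud diffusion model that
# predicts the injected noise at step t, and `classifier(x)` maps an (N, 3)
# point cloud to class logits.

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    betas = torch.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars

@torch.no_grad()
def purify(x, denoiser, t_star=100, T=1000):
    """Diffuse an (N, 3) point cloud x to step t_star, then run the reverse
    DDPM chain back to step 0 to wash out adversarial perturbations."""
    betas, alphas, alpha_bars = make_schedule(T)

    # Forward diffusion: inject Gaussian noise up to intermediate step t_star.
    noise = torch.randn_like(x)
    x_t = torch.sqrt(alpha_bars[t_star - 1]) * x \
        + torch.sqrt(1.0 - alpha_bars[t_star - 1]) * noise

    # Reverse process: iteratively denoise from t_star back to 1.
    for t in range(t_star, 0, -1):
        eps = denoiser(x_t, torch.tensor([t]))              # predicted noise
        coef = betas[t - 1] / torch.sqrt(1.0 - alpha_bars[t - 1])
        mean = (x_t - coef * eps) / torch.sqrt(alphas[t - 1])
        if t > 1:
            x_t = mean + torch.sqrt(betas[t - 1]) * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t

def robust_predict(x, denoiser, classifier, t_star=100):
    """Purify the input cloud, then classify the purified version."""
    return classifier(purify(x, denoiser, t_star=t_star)).argmax(dim=-1)
```

In such a pipeline, the choice of t_star trades off how much adversarial noise is removed against how much clean geometry is distorted; the schedule, step count, and diffusion backbone used by PointDP may differ.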
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z)
- PCLD: Point Cloud Layerwise Diffusion for Adversarial Purification [0.8192907805418583]
Point clouds are extensively employed in a variety of real-world applications such as robotics, autonomous driving and augmented reality.
A typical way to assess a model's robustness is through adversarial attacks.
We propose Point Cloud Layerwise Diffusion (PCLD), a layerwise diffusion based 3D point cloud defense strategy.
arXiv Detail & Related papers (2024-03-11T13:13:10Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack [64.83391236611409]
We propose a novel 3D attack method to generate adversarial samples solely with the knowledge of class labels.
Even in the challenging hard-label setting, 3DHacker still competitively outperforms existing 3D attacks in both attack performance and adversary quality.
arXiv Detail & Related papers (2023-08-15T03:29:31Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z)
- PointBA: Towards Backdoor Attacks in 3D Point Cloud [31.210502946247498]
We present the backdoor attacks in 3D with a unified framework that exploits the unique properties of 3D data and networks.
Our proposed backdoor attack in 3D point cloud is expected to perform as a baseline for improving the robustness of 3D deep models.
arXiv Detail & Related papers (2021-03-30T04:49:25Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
- ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)