PCLD: Point Cloud Layerwise Diffusion for Adversarial Purification
- URL: http://arxiv.org/abs/2403.06698v1
- Date: Mon, 11 Mar 2024 13:13:10 GMT
- Title: PCLD: Point Cloud Layerwise Diffusion for Adversarial Purification
- Authors: Mert Gulsen, Batuhan Cengiz, Yusuf H. Sahin, Gozde Unal
- Abstract summary: Point clouds are extensively employed in a variety of real-world applications such as robotics, autonomous driving and augmented reality.
A typical way to assess a model's robustness is through adversarial attacks.
We propose Point Cloud Layerwise Diffusion (PCLD), a layerwise diffusion based 3D point cloud defense strategy.
- Score: 0.8192907805418583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds are extensively employed in a variety of real-world applications
such as robotics, autonomous driving and augmented reality. Despite the recent
success of point cloud neural networks, especially for safety-critical tasks,
it is essential to also ensure the robustness of the model. A typical way to
assess a model's robustness is through adversarial attacks, where test-time
examples are generated based on gradients to deceive the model. While many
different defense mechanisms are studied in 2D, studies on 3D point clouds have
been relatively limited in the academic field. Inspired by PointDP, which
denoises the network inputs by diffusion, we propose Point Cloud Layerwise
Diffusion (PCLD), a layerwise diffusion based 3D point cloud defense strategy.
Unlike PointDP, we propagate the diffusion denoising after each layer to
incrementally enhance the results. We apply our defense method to different
types of commonly used point cloud models and adversarial attacks to evaluate
its robustness. Our experiments demonstrate that the proposed defense method
achieves results that are comparable to or surpass those of existing
methodologies, establishing robustness through a novel technique. Code is
available at https://github.com/batuceng/diffusion-layer-robustness-pc.
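To make the gradient-based test-time attacks mentioned in the abstract concrete, the following is a minimal FGSM-style sketch on raw point coordinates. It is an illustrative assumption, not a procedure taken from the paper: ToyPointClassifier, the epsilon value, and the pooling choice are hypothetical stand-ins.
```python
# Hypothetical sketch: a one-step gradient-sign (FGSM-style) attack on point
# coordinates. The classifier is a toy stand-in, not a model from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPointClassifier(nn.Module):
    """Pointwise MLP followed by max-pooling over the point dimension."""

    def __init__(self, num_classes: int = 40):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # (B, N, 3) -> (B, C)
        return self.mlp(pts).max(dim=1).values


def fgsm_points(model: nn.Module, points: torch.Tensor, labels: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Shift every coordinate by epsilon along the sign of the loss gradient."""
    points = points.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(points), labels)
    loss.backward()
    return (points + epsilon * points.grad.sign()).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyPointClassifier()
    clouds = torch.randn(2, 1024, 3)      # two clouds of 1024 points each
    labels = torch.tensor([0, 1])         # arbitrary ground-truth classes
    adv = fgsm_points(model, clouds, labels)
    print((adv - clouds).abs().max().item())  # perturbation is bounded by epsilon
```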
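To make the layerwise idea itself concrete, here is a minimal sketch contrasting input-only purification (in the spirit of PointDP) with purification propagated after every layer (in the spirit of PCLD). This is not the authors' implementation; see the repository linked above for that. ToyDiffusionPurifier and the two forward functions are hypothetical stand-ins for a pretrained diffusion purifier and the layers of a point cloud classifier.
```python
# Hypothetical sketch: input-only purification vs. layerwise purification.
# ToyDiffusionPurifier only mimics the interface of a diffusion purifier; a
# real one would forward-diffuse to a chosen timestep and then run the
# learned reverse (denoising) chain.
import torch
import torch.nn as nn


class ToyDiffusionPurifier(nn.Module):
    def __init__(self, noise_scale: float = 0.05):
        super().__init__()
        self.noise_scale = noise_scale

    def purify(self, x: torch.Tensor) -> torch.Tensor:
        # Forward-diffuse (inject a small amount of noise) ...
        noisy = x + self.noise_scale * torch.randn_like(x)
        # ... then "denoise"; a toy shrinkage step stands in for the reverse chain.
        return 0.9 * noisy + 0.1 * x


def pointdp_style_forward(layers, purifier, points):
    """Purify only the raw input point cloud, then run all layers."""
    h = purifier.purify(points)
    for layer in layers:
        h = layer(h)
    return h


def pcld_style_forward(layers, purifiers, points):
    """Purify the input AND each intermediate representation."""
    h = purifiers[0].purify(points)
    for layer, purifier in zip(layers, purifiers[1:]):
        h = layer(h)
        h = purifier.purify(h)  # the layerwise purification step
    return h


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy "point cloud network": pointwise linear layers over (B, N, C) tensors.
    layers = nn.ModuleList([nn.Linear(3, 64), nn.Linear(64, 64), nn.Linear(64, 40)])
    purifiers = [ToyDiffusionPurifier() for _ in range(len(layers) + 1)]
    clouds = torch.randn(2, 1024, 3)
    out_dp = pointdp_style_forward(layers, purifiers[0], clouds)
    out_pcld = pcld_style_forward(layers, purifiers, clouds)
    print(out_dp.shape, out_pcld.shape)  # both: torch.Size([2, 1024, 40])
```
Running the sketch only confirms that both paths produce outputs of the same shape; the substantive difference is where the purifier is applied.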
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z) - PointDP: Diffusion-driven Purification against Adversarial Attacks on 3D
Point Cloud Recognition [29.840946461846]
3D point clouds are a critical data representation in many real-world applications like autonomous driving, robotics, and medical imaging.
Deep learning is notorious for its vulnerability to adversarial attacks.
We propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks.
arXiv Detail & Related papers (2022-08-21T04:49:17Z) - On Adversarial Robustness of Point Cloud Semantic Segmentation [16.89469632840972]
PCSS has been applied in many safety-critical applications like autonomous driving.
This study shows how PCSS models are affected by adversarial samples.
We call the attention of the research community to develop new approaches to harden PCSS models.
arXiv Detail & Related papers (2021-12-11T00:10:00Z) - A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z) - 3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation [11.198650616143219]
We show that training with adversarial samples can improve the performance of neural networks on 3D point cloud completion tasks.
We propose a novel approach to generating adversarial samples that benefits performance on both clean and adversarial samples.
Experimental results show that training with the adversarial samples crafted by our method effectively enhances the performance of PCN on the ShapeNet dataset.
arXiv Detail & Related papers (2021-09-21T13:16:46Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - Minimal Adversarial Examples for Deep Learning on 3D Point Clouds [25.569519066857705]
In this work, we explore adversarial attacks for point cloud-based neural networks.
We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies.
Our method achieves state-of-the-art performance, with attack success rates above 89% and 90% on synthetic and real-world data, respectively.
arXiv Detail & Related papers (2020-08-27T11:50:45Z) - ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Unlike prior works, the resulting adversarial 3D point clouds reflect shape variations in the 3D point cloud space while remaining close to the original ones.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)