LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
- URL: http://arxiv.org/abs/2202.11287v1
- Date: Wed, 23 Feb 2022 03:31:25 GMT
- Title: LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
- Authors: Hanieh Naderi, Arian Etemadi, Kimia Noorbakhsh and Shohreh Kasaei
- Abstract summary: 3D point cloud classification is still very vulnerable to adversarial attacks.
Adversarial perturbations are found mostly in the mid- and high-frequency components of input data.
By suppressing the high-frequency content in the training phase, the robustness of models against adversarial examples is improved.
- Score: 11.496599300185915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although 3D point cloud classification has recently been widely deployed in
different application scenarios, it is still very vulnerable to adversarial
attacks. This increases the importance of robust training of 3D models in the
face of adversarial attacks. Based on our analysis of existing adversarial
attacks, adversarial perturbations are found mostly in the mid- and
high-frequency components of input data. Therefore, by suppressing the
high-frequency content in the training phase, the models' robustness against
adversarial examples is improved. Experiments showed that the proposed defense
method decreases the success rate of six attacks on the PointNet, PointNet++,
and DGCNN models. In particular, improvements are achieved with an average
increase of classification accuracy by 3.8% on the drop100 attack and 4.26% on
the drop200 attack compared to the state-of-the-art methods. The method also
improves the models' accuracy on the original dataset compared to other
available methods.
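To make the core idea concrete, below is a minimal sketch of low-pass filtering a point cloud in the frequency domain, here via a truncated spherical-harmonic fit of the radius function. The helper name lpf_point_cloud, the cutoff degree l_max, and the star-convexity assumption are illustrative choices, not the paper's exact pipeline.

```python
# Minimal sketch: low-pass filter a point cloud by fitting spherical
# harmonics to the radius function r(theta, phi) and truncating the fit
# at degree l_max. Assumes the shape is roughly star-convex about its
# centroid; the paper's actual pipeline may differ in detail.
import numpy as np
from scipy.special import sph_harm

def lpf_point_cloud(points: np.ndarray, l_max: int = 10) -> np.ndarray:
    """Return a smoothed copy of an (N, 3) point cloud (hypothetical helper)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    r = np.linalg.norm(centered, axis=1)
    eps = 1e-12
    polar = np.arccos(np.clip(centered[:, 2] / (r + eps), -1.0, 1.0))
    azimuth = np.arctan2(centered[:, 1], centered[:, 0])
    # Real-valued spherical-harmonic basis up to degree l_max.
    basis = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, azimuth, polar)  # SciPy order: (m, n, azimuth, polar)
            basis.append(y.real if m >= 0 else y.imag)
    A = np.stack(basis, axis=1)                     # (N, (l_max + 1) ** 2)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)  # least-squares fit of r
    r_low = A @ coeffs                              # low-pass radius per point
    directions = centered / (r[:, None] + eps)
    return centroid + directions * r_low[:, None]
```

Training a classifier on lpf_point_cloud(x) instead of x removes the high-frequency detail where, per the abstract, most adversarial perturbation energy resides.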
Related papers
- Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training [9.072521170921712]
3D point cloud deep learning models are vulnerable to adversarial attacks.
Adversarial purification employs a generative model to mitigate the impact of adversarial attacks.
We propose a test-time purified self-training strategy to achieve this objective.
arXiv Detail & Related papers (2024-09-23T11:46:38Z) - Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware
Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - Adaptive Modeling Against Adversarial Attacks [1.90365714903665]
Adversarial training, the process of training a deep learning model with adversarial data, is one of the most successful adversarial defense methods for deep learning models.
We have found that the robustness to white-box attacks of an adversarially trained model can be further improved if we fine-tune this model at the inference stage to adapt to the adversarial input.
arXiv Detail & Related papers (2021-12-23T09:52:30Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - On Adversarial Robustness of 3D Point Cloud Classification under
Adaptive Attacks [22.618325281125916]
3D point clouds play pivotal roles in various safety-critical applications, such as autonomous driving.
We perform the first security analysis of state-of-the-art defenses and design adaptive evaluations on them.
Our 100% adaptive attack success rates show that current countermeasures are still vulnerable.
arXiv Detail & Related papers (2020-11-24T06:46:38Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints; a minimal sketch of such a coordinate-optimization loop appears after this list.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
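Since the entries above describe IF-Defense only at a high level, the following is a minimal PyTorch sketch of the coordinate-optimization loop it implies. Both loss terms are illustrative stand-ins: the real geometry-aware term scores points against a pretrained implicit-function network (omitted here), and purify_point_cloud with its hyperparameters is hypothetical.

```python
# Illustrative stand-in for IF-Defense-style input purification: optimize
# point coordinates under (a) a crude geometry term that keeps each point
# close to its neighborhood and (b) a crude distribution term that repels
# points that crowd together. The real geometry term uses a pretrained
# implicit-function network, which this sketch does not include.
import torch

def purify_point_cloud(points: torch.Tensor, steps: int = 200, lr: float = 0.01,
                       k: int = 8, rep_weight: float = 0.05) -> torch.Tensor:
    x = points.clone().requires_grad_(True)  # (N, 3) coordinates to optimize
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        d = torch.cdist(x, x)                             # pairwise distances, (N, N)
        knn = d.topk(k + 1, largest=False).values[:, 1:]  # drop self-distance
        geo_loss = knn.mean()                             # pull points toward neighbors
        rep_loss = -knn.min(dim=1).values.mean()          # push apart nearest pairs
        loss = geo_loss + rep_weight * rep_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```

The purified cloud is then fed to the unchanged classifier; this family of defenses modifies inputs, not model weights.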
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.