On Adversarial Robustness of 3D Point Cloud Classification under
Adaptive Attacks
- URL: http://arxiv.org/abs/2011.11922v2
- Date: Tue, 6 Apr 2021 18:36:44 GMT
- Title: On Adversarial Robustness of 3D Point Cloud Classification under
Adaptive Attacks
- Authors: Jiachen Sun, Karl Koenig, Yulong Cao, Qi Alfred Chen, Z. Morley Mao
- Abstract summary: 3D point clouds play pivotal roles in various safety-critical applications, such as autonomous driving.
We perform the first security analysis of state-of-the-art defenses and design adaptive evaluations on them.
Our 100% adaptive attack success rates show that current countermeasures are still vulnerable.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D point clouds play pivotal roles in various safety-critical applications,
such as autonomous driving, which require the underlying deep neural networks
to be robust to adversarial perturbations. Though a few defenses against
adversarial point cloud classification have been proposed, it remains unknown
whether they are truly robust to adaptive attacks. To this end, we perform the
first security analysis of state-of-the-art defenses and design adaptive
evaluations on them. Our 100% adaptive attack success rates show that current
countermeasures are still vulnerable. Since adversarial training (AT) is
believed to be the most robust defense, we present the first in-depth study
showing how AT behaves in point cloud classification and identify that the
required symmetric function (pooling operation) is paramount to the 3D model's
robustness under AT. Through our systematic analysis, we find that the
default-used fixed pooling (e.g., MAX pooling) generally weakens AT's
effectiveness in point cloud classification. Interestingly, we further discover
that sorting-based parametric pooling can significantly improve the models'
robustness. Based on these insights, we propose DeepSym, a deep symmetric
pooling operation, to architecturally advance the robustness to 47.0% under AT
without sacrificing nominal accuracy, outperforming the original design and a
strong baseline by 28.5% ($\sim 2.6 \times$) and 6.5%, respectively, in
PointNet.
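The abstract's central claim is that the choice of symmetric (pooling) function governs robustness under AT, and that a sorting-based parametric pooling outperforms fixed MAX pooling. The sketch below is a hypothetical, simplified illustration in plain Python, not the paper's actual DeepSym operator: sorting each feature channel across points keeps the pooling permutation-invariant, while learned weights make the symmetric function trainable. The function names and toy weights are illustrative assumptions.

```python
def max_pool(features):
    # Fixed MAX pooling: permutation-invariant but has no learnable parameters.
    # features: list of points, each a list of per-channel values.
    n_channels = len(features[0])
    return [max(point[c] for point in features) for c in range(n_channels)]


def sorted_parametric_pool(features, weights):
    # Sorting-based parametric pooling (illustrative sketch): sort every
    # channel across points in descending order, then take a weighted sum
    # with learned weights. Sorting makes the result independent of point
    # order (permutation invariance); the weights make the pooling trainable.
    n_channels = len(features[0])
    pooled = []
    for c in range(n_channels):
        column = sorted((point[c] for point in features), reverse=True)
        pooled.append(sum(w * v for w, v in zip(weights, column)))
    return pooled


# Toy feature matrix: 3 points, 2 channels.
feats = [[3.0, -1.0], [1.0, 4.0], [2.0, 0.0]]
print(max_pool(feats))                                 # [3.0, 4.0]
print(sorted_parametric_pool(feats, [1.0, 0.0, 0.0]))  # [3.0, 4.0]
```

With a one-hot weight on the first (largest) slot the parametric pooling reduces exactly to MAX pooling, and with uniform weights to average pooling, which shows that the fixed poolings the paper critiques are special cases a trainable symmetric function can move away from.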
Related papers
- Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data [12.641656743760874]
We propose a gradient attack that overcomes the failures of existing gradient attacks with adaptive mechanisms.
We also design CAA, an efficient evasion attack that combines our CAPGD attack and MOEVA, the best search-based attack.
Our empirical study demonstrates that CAA outperforms all existing attacks in 17 of the 20 settings.
arXiv Detail & Related papers (2024-06-02T15:26:52Z)
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
- Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria [61.048842737581865]
We show that Adversarial Training (AT) fails to learn robust features, resulting in poor adversarial robustness.
We propose a generic AT framework that gains robust representations via asymmetric negative contrast and reverse attention.
Empirical evaluations on three benchmark datasets show our methods greatly advance the robustness of AT and achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-10-05T07:29:29Z)
- Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality [10.118505317224683]
Adversarial examples are beneficial for improving the robustness of 3D neural network models.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- LPF-Defense: 3D Adversarial Defense based on Frequency Analysis [11.496599300185915]
3D point cloud classification is still very vulnerable to adversarial attacks.
Adversarial perturbations are found mostly in the mid- and high-frequency components of the input data.
By suppressing the high-frequency content during training, the models' robustness against adversarial examples is improved.
arXiv Detail & Related papers (2022-02-23T03:31:25Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features across attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.