Boosting 3D Adversarial Attacks with Attacking On Frequency
- URL: http://arxiv.org/abs/2201.10937v1
- Date: Wed, 26 Jan 2022 13:52:17 GMT
- Title: Boosting 3D Adversarial Attacks with Attacking On Frequency
- Authors: Binbin Liu, Jinlai Zhang, Lyujie Chen, Jihong Zhu
- Abstract summary: We propose a novel point cloud attack (dubbed AOF) that pays more attention to the low-frequency component of point clouds.
Experiments validate that AOF can improve the transferability significantly compared to state-of-the-art (SOTA) attacks.
- Score: 6.577812580043734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have been shown to be vulnerable to adversarial
attacks. Recently, 3D adversarial attacks, especially adversarial attacks on
point clouds, have elicited mounting interest. However, adversarial point
clouds obtained by previous methods show weak transferability and are easy to
defend against. To address these problems, in this paper we propose a novel
point cloud attack (dubbed AOF) that pays more attention to the low-frequency
component of point clouds. We combine the losses from the point cloud and its
low-frequency component to craft adversarial samples. Extensive experiments
validate that AOF improves transferability significantly compared to
state-of-the-art (SOTA) attacks and is more robust to SOTA 3D defense methods.
Moreover, compared to clean point clouds, adversarial point clouds obtained by
AOF contain more deformation rather than outliers.
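The key technical step behind the abstract is separating a point cloud into low- and high-frequency parts before combining losses. Below is a minimal sketch, assuming a k-nearest-neighbor graph Fourier transform over the xyz coordinates; the function names, neighborhood size, and number of retained spectral components are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a low-frequency decomposition for point clouds,
# via a k-NN graph Laplacian and its smallest eigenvectors (illustrative only).
import numpy as np

def knn_graph_laplacian(points, k=10):
    """Unnormalized graph Laplacian of a k-nearest-neighbor graph over the points."""
    n = points.shape[0]
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]                       # k nearest neighbors, skipping self
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                                         # symmetrize adjacency
    return np.diag(W.sum(1)) - W

def low_frequency_component(points, num_low=30, k=10):
    """Project xyz coordinates onto the num_low smallest-eigenvalue Laplacian eigenvectors."""
    L = knn_graph_laplacian(points, k)
    _, U = np.linalg.eigh(L)              # eigenvectors sorted by ascending eigenvalue
    U_low = U[:, :num_low]
    return U_low @ (U_low.T @ points)     # smooth, shape-level reconstruction
```

An attack in this spirit would then add the classifier's loss on the low-frequency reconstruction of the adversarial cloud to its loss on the full cloud, steering the perturbation toward smooth, shape-level changes rather than high-frequency outliers.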
Related papers
- Transferable 3D Adversarial Shape Completion using Diffusion Models [8.323647730916635]
3D point cloud feature learning has significantly improved the performance of 3D deep-learning models.
Existing attack methods primarily focus on white-box scenarios and struggle to transfer to recently proposed 3D deep-learning models.
In this paper, we generate high-quality adversarial point clouds using diffusion models.
Our proposed attacks outperform state-of-the-art adversarial attack methods against both black-box models and defenses.
arXiv Detail & Related papers (2024-07-14T04:51:32Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks
for Defending Adversarial Examples [25.029854308139853]
Adversarial examples on 3D point clouds are more challenging to defend against than those on 2D images.
In this paper, we first establish a comprehensive and rigorous point cloud adversarial robustness benchmark.
We then perform extensive and systematic experiments to identify an effective combination of these tricks.
We construct a more robust defense framework achieving an average accuracy of 83.45% against various attacks.
arXiv Detail & Related papers (2023-07-31T01:34:24Z) - Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented
Reality [10.118505317224683]
Adversarial examples are beneficial for improving the robustness of 3D neural network models.
Most 3D adversarial attack methods perturb the entire point cloud to generate adversarial examples.
We propose an adaptive local adversarial attack method (AL-Adv) on 3D point clouds to generate adversarial point clouds.
arXiv Detail & Related papers (2023-03-12T11:52:02Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01 (a minimal Chamfer distance sketch follows this list).
arXiv Detail & Related papers (2022-11-22T14:15:41Z) - Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial strength and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - 3D Adversarial Attacks Beyond Point Cloud [8.076067288723133]
Previous adversarial attacks on 3D point clouds mainly focus on adding perturbations to the original point cloud.
We present a novel adversarial attack, named Mesh Attack, to address this problem.
arXiv Detail & Related papers (2021-04-25T13:01:41Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)