No-Box Attacks on 3D Point Cloud Classification
- URL: http://arxiv.org/abs/2210.14164v3
- Date: Sat, 27 Jan 2024 19:12:15 GMT
- Title: No-Box Attacks on 3D Point Cloud Classification
- Authors: Hanieh Naderi, Chinthaka Dinesh, Ivan V. Bajic and Shohreh Kasaei
- Abstract summary: Adversarial attacks pose serious challenges for deep neural network (DNN)-based analysis of various input signals.
This paper defines 14 point cloud features to examine whether these features can be used for adversarial point prediction.
Experiments show that a suitable combination of features is able to predict adversarial points of four different networks.
- Score: 35.55129060534018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks pose serious challenges for deep neural network
(DNN)-based analysis of various input signals. In the case of 3D point clouds,
methods have been developed to identify points that play a key role in the
network's decision, and these become crucial in generating existing adversarial attacks.
For example, a saliency map approach is a popular method for identifying
adversarial drop points, whose removal would significantly impact the network's
decision. Generally, methods for identifying adversarial points rely on
access to the DNN model itself to determine which points are critically
important for the model's decision. This paper aims to provide a novel
viewpoint on this problem, where adversarial points can be predicted without
access to the target DNN model, which is referred to as a "no-box" attack. To
this end, we define 14 point cloud features and use multiple linear regression
to examine whether these features can be used for adversarial point prediction,
and which combination of features is best suited for this purpose. Experiments
show that a suitable combination of features is able to predict adversarial
points of four different networks -- PointNet, PointNet++, DGCNN, and PointConv
-- significantly better than a random guess and comparable to white-box
attacks. Additionally, we show that the no-box attack is transferable to unseen
models. The results also provide further insight into DNNs for point cloud
classification, by showing which features play key roles in their
decision-making process.
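As a rough, hypothetical sketch of the idea described in the abstract (not the paper's actual implementation), the Python snippet below computes a few simple per-point geometric features as stand-ins for the 14 features defined in the paper, fits a multiple linear regression that maps those features to per-point saliency scores obtained from a surrogate model during training, and then uses the regression alone to rank and drop points of an unseen cloud without querying the target network. The specific features, the saliency source, and the `drop_points` interface are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def point_features(pc, k=16):
    """Hypothetical per-point features for a point cloud pc of shape (N, 3).
    These are simple stand-ins, not the paper's 14 features."""
    centroid = pc.mean(axis=0)
    dist_to_centroid = np.linalg.norm(pc - centroid, axis=1)   # distance to centroid
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(pc)
    d, _ = nbrs.kneighbors(pc)                                 # k-NN distances (first column is the point itself)
    mean_knn_dist = d[:, 1:].mean(axis=1)                      # local sparsity
    height = pc[:, 2] - pc[:, 2].min()                         # height above the lowest point
    return np.stack([dist_to_centroid, mean_knn_dist, height], axis=1)

def fit_predictor(train_clouds, saliency_scores):
    """Fit multiple linear regression from per-point features to saliency scores
    (e.g., scores produced by a white-box saliency-map attack on a surrogate model)."""
    X = np.vstack([point_features(pc) for pc in train_clouds])
    y = np.concatenate(saliency_scores)
    return LinearRegression().fit(X, y)

def drop_points(reg, pc, n_drop=50):
    """No-box drop attack: rank points of an unseen cloud by predicted saliency
    and remove the top-n, without access to the target classifier."""
    scores = reg.predict(point_features(pc))
    keep = np.argsort(scores)[:-n_drop]
    return pc[keep]
```

In this setup the target network is never touched at attack time; one would evaluate the attack by comparing a classifier's accuracy on the original clouds against its accuracy on the clouds returned by `drop_points`.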
Related papers
- Risk-optimized Outlier Removal for Robust 3D Point Cloud Classification [54.286437930350445]
This paper highlights the challenges of point cloud classification posed by various forms of noise.
We introduce an innovative point outlier cleansing method that harnesses the power of downstream classification models.
Our proposed technique not only robustly filters diverse point cloud outliers but also consistently and significantly enhances existing robust methods for point cloud classification.
arXiv Detail & Related papers (2023-07-20T13:47:30Z) - PCV: A Point Cloud-Based Network Verifier [8.239631885389382]
We describe a point cloud-based network verifier that successfully handles the state-of-the-art 3D PointNet.
We calculate the impact on model accuracy versus the property factor and can test the PointNet network's robustness against a small collection of perturbing input states.
arXiv Detail & Related papers (2023-01-27T15:58:54Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive
Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models
Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
arXiv Detail & Related papers (2022-11-22T14:15:41Z) - Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new attack methods, OPA and CTA, which go in the opposite direction from existing attacks.
We show that the popular point cloud networks can be deceived with almost 100% success rate by shifting only one point from the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
arXiv Detail & Related papers (2021-10-08T14:29:02Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object
Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of
Point Cloud-based Deep Networks [123.5839352227726]
This paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.
To the best of our knowledge, this is the first generation-based 3D point cloud attack method.
arXiv Detail & Related papers (2020-11-01T17:17:10Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - Minimal Adversarial Examples for Deep Learning on 3D Point Clouds [25.569519066857705]
In this work, we explore adversarial attacks for point cloud-based neural networks.
We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies.
Our method achieves state-of-the-art performance, with attack success rates higher than 89% and 90% on synthetic and real-world data, respectively.
arXiv Detail & Related papers (2020-08-27T11:50:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.