No-Box Attacks on 3D Point Cloud Classification
- URL: http://arxiv.org/abs/2210.14164v3
- Date: Sat, 27 Jan 2024 19:12:15 GMT
- Title: No-Box Attacks on 3D Point Cloud Classification
- Authors: Hanieh Naderi, Chinthaka Dinesh, Ivan V. Bajic and Shohreh Kasaei
- Abstract summary: Adversarial attacks pose serious challenges for deep neural network (DNN)-based analysis of various input signals.
This paper defines 14 point cloud features to examine whether these features can be used for adversarial point prediction.
Experiments show that a suitable combination of features is able to predict adversarial points of four different networks.
- Score: 35.55129060534018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks pose serious challenges for deep neural network
(DNN)-based analysis of various input signals. In the case of 3D point clouds,
methods have been developed to identify points that play a key role in network
decision, and these become crucial in generating existing adversarial attacks.
For example, a saliency map approach is a popular method for identifying
adversarial drop points, whose removal would significantly impact the network
decision. Generally, methods for identifying adversarial points rely on access
to the DNN model itself to determine which points are critically important for
the model's decision. This paper aims to provide a novel viewpoint on this
problem, where adversarial points can be predicted without access to the target
DNN model, which is referred to as a "no-box" attack. To
this end, we define 14 point cloud features and use multiple linear regression
to examine whether these features can be used for adversarial point prediction,
and which combination of features is best suited for this purpose. Experiments
show that a suitable combination of features is able to predict adversarial
points of four different networks -- PointNet, PointNet++, DGCNN, and PointConv
-- significantly better than a random guess and comparable to white-box
attacks. Additionally, we show that the no-box attack is transferable to unseen
models. The results also provide further insight into DNNs for point cloud
classification, by showing which features play key roles in their
decision-making process.
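As a rough illustration of the pipeline described above, the sketch below computes three stand-in geometric features per point (the paper's 14 features are not reproduced here), fits a multiple linear regression against per-point saliency scores that would in practice be exported from a white-box attack on a source model, and then ranks and drops the predicted critical points of an unseen cloud. Feature choices, array shapes, and the random placeholder data are all assumptions.

```python
# Minimal no-box sketch: hand-crafted per-point features + multiple linear
# regression predicting per-point adversarial importance. Illustrative only;
# the paper's actual 14 features are not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def point_features(pc: np.ndarray, k: int = 10) -> np.ndarray:
    """pc: (N, 3) point cloud -> (N, 3) matrix of assumed features."""
    dist_to_centroid = np.linalg.norm(pc - pc.mean(axis=0), axis=1)
    knn_dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(pc).kneighbors(pc)
    local_density = knn_dists[:, 1:].mean(axis=1)  # mean distance to k neighbors
    height = pc[:, 2]                              # z coordinate
    return np.stack([dist_to_centroid, local_density, height], axis=1)

# Placeholder training data: in practice, per-point saliency scores exported
# from a white-box attack on a source model would serve as regression targets.
train_clouds = [np.random.randn(1024, 3) for _ in range(8)]
train_scores = [np.random.rand(1024) for _ in train_clouds]
reg = LinearRegression().fit(
    np.vstack([point_features(pc) for pc in train_clouds]),
    np.concatenate(train_scores),
)

# Attack time: no access to the target model is needed. Rank the points of a
# new cloud by predicted score and drop the highest-scoring ones.
test_pc = np.random.randn(1024, 3)
drop_idx = np.argsort(reg.predict(point_features(test_pc)))[-100:]
adv_pc = np.delete(test_pc, drop_idx, axis=0)      # adversarial drop attack
```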
Related papers
- PCV: A Point Cloud-Based Network Verifier [8.239631885389382]
We describe a point cloud-based network verifier that successfully deals with the state-of-the-art 3D classifier PointNet.
We calculate the impact on model accuracy versus the property factor and can test PointNet's robustness against a small collection of perturbed input states.
arXiv Detail & Related papers (2023-01-27T15:58:54Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
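A minimal sketch of this alignment idea, under heavy assumptions: a tiny convolutional generator stands in for the DGN, a small frozen network stands in for the target DNN's feature extractor, random tensors stand in for paired adversarial/clean batches, and plain mean squared errors stand in for the paper's losses.

```python
# Sketch: train a generator to translate adversarial inputs so they match
# clean ones at the pixel level and in a frozen feature space. All modules
# and data below are hypothetical stand-ins.
import torch
import torch.nn as nn

feature_net = nn.Sequential(                     # frozen stand-in for the
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # target DNN's features
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

generator = nn.Sequential(                       # tiny stand-in for the DGN
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

pairs = [(torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32))
         for _ in range(4)]                      # fake adversarial/clean pairs
for x_adv, x_clean in pairs:
    x_trans = generator(x_adv)                             # translate pixels
    pixel_loss = (x_trans - x_clean).pow(2).mean()         # pixel alignment
    feat_loss = (feature_net(x_trans)
                 - feature_net(x_clean)).pow(2).mean()     # feature alignment
    loss = pixel_loss + feat_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```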
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
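For reference, the chamfer distance mentioned above can be computed as follows; this is one common symmetric variant, and the paper's exact normalization may differ.

```python
# Symmetric chamfer distance between two point clouds (one common variant).
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N, 3), b: (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pc = np.random.rand(256, 3)
adv = pc + 0.001 * np.random.randn(256, 3)   # small perturbation
assert chamfer_distance(adv, pc) < 0.01      # within the reported 0.01 budget
```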
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new attack methods, OPA and CTA, which go in the opposite direction of existing attacks by perturbing as few points as possible.
We show that popular point cloud networks can be deceived with an almost 100% success rate by shifting only one point of the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
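A hedged sketch of a single-point attack in this spirit (not the authors' OPA/CTA implementation): a toy model stands in for a real point cloud network, the most gradient-sensitive point is selected via input gradients, and only that one point is shifted to reduce the true-class logit until the prediction flips.

```python
# Sketch: pick the point with the largest input-gradient norm, then shift
# only that point against the true-class logit. The model is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 10))

def classify(pc):                    # (N, 3) -> (10,) logits via max-pooling
    return model(pc).max(dim=0).values

pc = torch.rand(1024, 3)
label = classify(pc).argmax()

x = pc.clone().requires_grad_(True)  # saliency pass: gradient of true logit
classify(x)[label].backward()
idx = x.grad.norm(dim=1).argmax()    # most gradient-sensitive point

adv = pc.clone()
for _ in range(100):
    adv = adv.detach().requires_grad_(True)
    classify(adv)[label].backward()
    with torch.no_grad():
        adv[idx] -= 0.01 * adv.grad[idx].sign()   # descend true-class logit
    if classify(adv.detach()).argmax() != label:
        break                        # one shifted point flipped the prediction
```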
arXiv Detail & Related papers (2021-10-08T14:29:02Z)
- Local Aggressive Adversarial Attacks on 3D Point Cloud [12.121901103987712]
Deep neural networks are prone to adversarial examples that can deliberately fool a model into making mistakes.
In this paper, we propose a local aggressive adversarial attack (L3A) method to address the above issues.
Experiments on PointNet, PointNet++ and DGCNN demonstrate the state-of-the-art performance of our method.
arXiv Detail & Related papers (2021-05-19T12:22:56Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, addressing these challenges with two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks [123.5839352227726]
This paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.
To the best of our knowledge, this is the first generation-based 3D point cloud attack method.
arXiv Detail & Related papers (2020-11-01T17:17:10Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
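A minimal sketch of the coordinate-optimization idea, with the implicit-function network omitted: a geometry term keeps restored points near the input while a distribution term encourages even nearest-neighbor spacing. Both terms are simplified assumptions, not IF-Defense's actual losses.

```python
# Sketch: directly optimize point coordinates under two soft constraints,
# then feed the restored cloud to the classifier.
import torch

pc_in = torch.rand(256, 3)                     # possibly attacked input cloud
x = pc_in.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=1e-3)

for _ in range(200):
    diff = x[:, None, :] - x[None, :, :]       # (N, N, 3) pairwise offsets
    d = diff.pow(2).sum(-1).add(1e-12).sqrt()  # pairwise distances, NaN-safe
    d = d + torch.eye(len(x)) * 1e9            # mask self-distances
    nn_dist = d.min(dim=1).values              # nearest-neighbor distances
    geo_loss = (x - pc_in).pow(2).sum(dim=1).mean()       # stay near input
    dist_loss = (nn_dist - nn_dist.mean()).pow(2).mean()  # even spacing
    opt.zero_grad()
    (geo_loss + 0.5 * dist_loss).backward()
    opt.step()

restored = x.detach()                          # input to the point classifier
```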
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.