Explainability-Aware One Point Attack for Point Cloud Neural Networks
- URL: http://arxiv.org/abs/2110.04158v1
- Date: Fri, 8 Oct 2021 14:29:02 GMT
- Title: Explainability-Aware One Point Attack for Point Cloud Neural Networks
- Authors: Hanxiao Tan and Helena Kotthaus
- Abstract summary: This work proposes two new attack methods, OPA and CTA, which go in the opposite direction of imperceptibility-focused attacks.
We show that popular point cloud networks can be deceived with an almost 100% success rate by shifting only one point from the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the introduction of neural networks for point clouds, deep
learning has started to shine in the field of 3D object recognition, and
researchers have shown increasing interest in investigating the reliability of
point cloud networks by fooling them with perturbed instances. However, most
studies focus on imperceptibility or surface consistency, so that humans
perceive no perturbations on the adversarial examples. This work proposes two
new attack methods, OPA and CTA, which go in the opposite direction: we
restrict the perturbation dimensions to a human-cognizable range with the help
of explainability methods, so that the working principle or decision boundary
of the models becomes comprehensible through the observable perturbation
magnitude. Our results show that popular point cloud networks can be deceived
with an almost 100% success rate by shifting only one point from the input
instance. In addition, we attempt to provide a more persuasive viewpoint for
comparing the robustness of point cloud models against adversarial attacks.
We also show the interesting impact of different point attribution
distributions on the adversarial robustness of point cloud networks. Finally,
we discuss how our approaches facilitate the explainability study for point
cloud networks. To the best of our knowledge, this is the first
point-cloud-based adversarial approach concerning explainability. Our code is
available at https://github.com/Explain3D/Exp-One-Point-Atk-PC.
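The attack described above is attribution-guided: an explainability method first scores how much each input point contributes to the prediction, and only the single most critical point is then shifted until the classification flips. The following is a minimal, hypothetical PyTorch sketch of that idea; the input-gradient saliency, step size, iteration budget, and the TinyPointNet stand-in classifier are illustrative assumptions, not the authors' exact OPA implementation (see the linked repository for that).

    import torch
    import torch.nn.functional as F


    def one_point_attack(model, points, label, step=0.01, max_iters=100):
        """Attribution-guided single-point attack (illustrative sketch, not the paper's exact OPA).

        points: (N, 3) float tensor, label: integer ground-truth class.
        Returns the perturbed cloud and a success flag.
        """
        model.eval()

        # 1) Attribution step: rank points by the gradient magnitude of the
        #    true-class logit and pick the most salient one.
        x = points.detach().clone().unsqueeze(0).requires_grad_(True)   # (1, N, 3)
        model(x)[0, label].backward()
        idx = x.grad.norm(dim=2).squeeze(0).argmax()                    # most critical point

        # 2) Attack step: iteratively shift only that point along the sign of the
        #    loss gradient until the predicted class flips (untargeted attack).
        adv = points.detach().clone()
        for _ in range(max_iters):
            x = adv.clone().unsqueeze(0).requires_grad_(True)
            loss = F.cross_entropy(model(x), torch.tensor([label]))
            loss.backward()
            with torch.no_grad():
                adv[idx] += step * x.grad[0, idx].sign()                # move one point only
                if model(adv.unsqueeze(0)).argmax(dim=1).item() != label:
                    return adv, True
        return adv, False


    if __name__ == "__main__":
        # Tiny permutation-invariant classifier as a stand-in for PointNet-style models.
        class TinyPointNet(torch.nn.Module):
            def __init__(self, classes=10):
                super().__init__()
                self.mlp = torch.nn.Sequential(
                    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, classes)
                )

            def forward(self, x):                     # x: (B, N, 3)
                return self.mlp(x).max(dim=1).values  # max-pool over points -> (B, classes)

        net = TinyPointNet()
        cloud = torch.randn(1024, 3)
        pred = net(cloud.unsqueeze(0)).argmax(dim=1).item()
        adv_cloud, flipped = one_point_attack(net, cloud, pred)
        print("prediction flipped by moving one point:", flipped)

Restricting the update to one attribution-selected index is what keeps the perturbation confined to a single, observable point; the choice of attribution method and the optimization schedule in the actual OPA may differ from this sketch.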
Related papers
- Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z)
- PointCaM: Cut-and-Mix for Open-Set Point Cloud Learning [72.07350827773442]
We propose to solve open-set point cloud learning using a novel Point Cut-and-Mix mechanism.
We use the Unknown-Point Simulator to simulate out-of-distribution data in the training stage.
The Unknown-Point Estimator module learns to exploit the point cloud's feature context for discriminating the known and unknown data.
arXiv Detail & Related papers (2022-12-05T03:53:51Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01 (a chamfer-distance sketch is given after this list).
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Visualizing Global Explanations of Point Cloud DNNs [0.0]
We propose a point cloud-applicable explainability approach based on a local surrogate model-based method to show which components contribute to the classification.
Our new explainability approach provides a fairly accurate, more semantically coherent and widely applicable explanation for point cloud classification tasks.
arXiv Detail & Related papers (2022-03-17T17:53:11Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial strength and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraints of large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- Surrogate Model-Based Explainability Methods for Point Cloud NNs [0.0]
We propose new explainability approaches for point cloud deep neural networks.
Our approach provides a fairly accurate, more intuitive and widely applicable explanation for point cloud classification tasks.
arXiv Detail & Related papers (2021-07-28T16:13:20Z)
- Minimal Adversarial Examples for Deep Learning on 3D Point Clouds [25.569519066857705]
In this work, we explore adversarial attacks for point cloud-based neural networks.
We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies.
Our method achieves state-of-the-art performance, with attack success rates above 89% and 90% on synthetic and real-world data, respectively.
arXiv Detail & Related papers (2020-08-27T11:50:45Z)
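For reference on the chamfer-distance constraint mentioned in the PointCA entry above, here is a minimal sketch of a symmetric (squared) chamfer distance between two point clouds in PyTorch; the exact "structure chamfer distance" variant used by PointCA may be defined differently.

    import torch


    def chamfer_distance(p, q):
        """Symmetric squared chamfer distance between point clouds p (N, 3) and q (M, 3)."""
        d = torch.cdist(p, q).pow(2)  # pairwise squared Euclidean distances, shape (N, M)
        # Average nearest-neighbour distance in both directions.
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


    # Usage: near-identical clouds yield a small value.
    a = torch.randn(2048, 3)
    b = a + 0.001 * torch.randn_like(a)
    print(chamfer_distance(a, b).item())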