Surrogate Model-Based Explainability Methods for Point Cloud NNs
- URL: http://arxiv.org/abs/2107.13459v1
- Date: Wed, 28 Jul 2021 16:13:20 GMT
- Title: Surrogate Model-Based Explainability Methods for Point Cloud NNs
- Authors: Hanxiao Tan, Helena Kotthaus
- Abstract summary: We propose new explainability approaches for point cloud deep neural networks.
Our approach provides a fairly accurate, more intuitive and widely applicable explanation for point cloud classification tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving and robotics, point clouds, the raw data produced by most mainstream 3D sensors, offer excellent real-time performance. Point cloud neural networks have therefore become a popular research direction in recent years. So far, however, there has been little
discussion about the explainability of deep neural networks for point clouds.
In this paper, we propose new explainability approaches for point cloud deep
neural networks based on local surrogate model-based methods to show which
components make the main contribution to the classification. Moreover, we
propose a quantitative validation method for explainability methods of point
clouds which enhances the persuasive power of explainability by dropping the
most positive or negative contributing features and monitoring how the
classification scores of specific categories change. To enable an intuitive
explanation of misclassified instances, we display features with confounding
contributions. Our new explainability approach provides a fairly accurate, more
intuitive and widely applicable explanation for point cloud classification
tasks. Our code is available at https://github.com/Explain3D/Explainable3D
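The two ideas in the abstract are concrete enough to sketch: a LIME-style local surrogate that scores the contribution of point cloud segments, and the drop-and-monitor validation that removes the most positively (or negatively) contributing points and tracks the change in the class score. The snippet below is a minimal sketch under assumptions, not the authors' implementation (see the repository above for that): the classifier interface `predict_proba`, the k-means segmentation, the Ridge surrogate and all hyper-parameters are illustrative choices.

```python
# Minimal sketch of a LIME-style local surrogate explanation for a point cloud
# classifier, plus the drop-and-monitor validation described in the abstract.
# Not the authors' implementation: `predict_proba`, the k-means segmentation and
# the Ridge surrogate are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge


def explain_point_cloud(points, predict_proba, target_class,
                        n_segments=32, n_samples=500, seed=0):
    """Return one contribution score per segment of `points` (N x 3 array)."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_segments, n_init=10,
                    random_state=seed).fit_predict(points)

    # Perturb the cloud by switching segments off; "off" points are collapsed to
    # the centroid so the classifier always sees a fixed-size input.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    centroid = points.mean(axis=0)
    scores = np.empty(n_samples)
    for i, mask in enumerate(masks):
        perturbed = points.copy()
        perturbed[mask[labels] == 0] = centroid
        scores[i] = predict_proba(perturbed)[target_class]

    # Fit an interpretable linear surrogate on (mask -> score); its coefficients
    # are the per-segment contributions to the target class.
    surrogate = Ridge(alpha=1.0).fit(masks, scores)
    return surrogate.coef_, labels


def drop_and_monitor(points, predict_proba, target_class,
                     coefs, labels, n_drop=4, most_positive=True):
    """Drop the n_drop most positive (or negative) segments and report how the
    target-class score changes -- the quantitative check from the abstract.
    Assumes the classifier accepts a variable number of input points."""
    order = np.argsort(coefs)[::-1] if most_positive else np.argsort(coefs)
    keep = ~np.isin(labels, order[:n_drop])
    return (predict_proba(points)[target_class],
            predict_proba(points[keep])[target_class])
```

A large drop in the class score after removing the most positive segments, and little change after removing the most negative ones, is the kind of behaviour this validation is meant to surface.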
Related papers
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
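As a rough illustration of what feature distillation means here, the snippet below shows a plain student-matches-teacher feature loss with a projection head; it is a generic sketch under assumed shapes, not the bidirectional knowledge reconfiguration proposed in the paper.

```python
# Generic feature-distillation sketch to illustrate the idea named above: a
# lightweight student is trained to match a frozen teacher's per-point features
# through a small projection head. Plain feature distillation, not the
# bidirectional knowledge reconfiguration proposed in the paper.
import torch.nn as nn


def feature_distillation_loss(student_feats, teacher_feats, proj):
    """student_feats: (B, N, Cs); teacher_feats: (B, N, Ct); proj: nn.Linear(Cs, Ct)."""
    return nn.functional.mse_loss(proj(student_feats), teacher_feats.detach())


# Usage sketch: proj = nn.Linear(64, 256)
# total_loss = task_loss + 0.5 * feature_distillation_loss(s_feats, t_feats, proj)
```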
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
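Since the summary states the pretext task only at a high level, here is a rough sketch of occupancy-style self-supervision: jittered surface points serve as occupied queries, random box samples as free queries, and a small decoder predicts occupancy from the backbone feature. The tiny encoder, the query-sampling heuristic and all names are assumptions for illustration, not the ALSO architecture or its label-generation scheme.

```python
# Rough sketch of an occupancy-style pretext task for pre-training a point cloud
# backbone, in the spirit of the summary above. The tiny encoder, the query
# sampling heuristic and all hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """PointNet-style global feature extractor (stand-in for a real backbone)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, feat_dim)


class OccupancyDecoder(nn.Module):
    """Predicts the occupancy of a 3D query point from the scene feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat, queries):             # queries: (B, Q, 3)
        feat = feat.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.mlp(torch.cat([feat, queries], dim=-1)).squeeze(-1)


def pretext_loss(encoder, decoder, pts, sigma=0.05):
    """Occupied queries: jittered surface points; free queries: random box samples
    (labels are noisy, which is acceptable for a sketch)."""
    B, N, _ = pts.shape
    occupied = pts + sigma * torch.randn_like(pts)
    low, high = pts.amin(dim=1, keepdim=True), pts.amax(dim=1, keepdim=True)
    free = low + torch.rand(B, N, 3, device=pts.device) * (high - low)
    queries = torch.cat([occupied, free], dim=1)
    targets = torch.cat([torch.ones(B, N), torch.zeros(B, N)], dim=1).to(pts.device)
    logits = decoder(encoder(pts), queries)
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)
```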
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Explaining Deep Neural Networks for Point Clouds using Gradient-based Visualisations [1.2891210250935146]
We propose a novel approach to generate coarse visual explanations of networks designed to classify unstructured 3D data.
Our method uses gradients flowing back to the final feature map layers and maps these values as contributions of the corresponding points in the input point cloud.
The generality of our approach is tested on various point cloud classification networks, including 'single object' networks PointNet, PointNet++, DGCNN, and a 'scene' network VoteNet.
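The mechanism described above (class-score gradients flowing back to the last per-point feature map) can be sketched in a few lines. The toy classifier, the hook placement and the channel-averaged weighting below are illustrative assumptions in the style of Grad-CAM, not the cited method.

```python
# Minimal Grad-CAM-style sketch: back-propagate the class score to the last
# per-point feature map and turn the result into per-point contributions.
# The toy classifier and the weighting scheme are illustrative assumptions.
import torch
import torch.nn as nn


class ToyPointClassifier(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.pointwise = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):                       # pts: (B, N, 3)
        self.feat = self.pointwise(pts)           # last per-point feature map (B, N, C)
        self.feat.retain_grad()                   # keep its gradient for the explanation
        return self.head(self.feat.max(dim=1).values)


def point_saliency(model, pts, target_class):
    """Per-point contribution scores for `target_class` (higher = more important)."""
    model.zero_grad()
    logits = model(pts)
    logits[:, target_class].sum().backward()
    # Grad-CAM-style: average the gradient over points to get channel weights,
    # apply them to the activations, sum over channels, keep positive evidence.
    weights = model.feat.grad.mean(dim=1, keepdim=True)                 # (B, 1, C)
    return (weights * model.feat).sum(dim=-1).clamp(min=0).detach()     # (B, N)


# Usage: sal = point_saliency(ToyPointClassifier(), torch.randn(1, 1024, 3), target_class=0)
```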
arXiv Detail & Related papers (2022-07-26T15:42:08Z)
- Visualizing Global Explanations of Point Cloud DNNs [0.0]
We propose an explainability approach applicable to point clouds, based on a local surrogate model, to show which components contribute to the classification.
Our new explainability approach provides a fairly accurate, more semantically coherent and widely applicable explanation for point cloud classification tasks.
arXiv Detail & Related papers (2022-03-17T17:53:11Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network for processing point clouds in a per-point manner, eliminating the need for kNN operations.
The proposed framework, PointAttN, is simple, neat and effective, and can precisely capture the structural information of 3D shapes.
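A short sketch of what per-point attention without kNN grouping can look like follows, assuming a single standard self-attention layer over per-point embeddings; this is only an illustration of the idea, not the PointAttN architecture.

```python
# Sketch of per-point self-attention without kNN grouping, illustrating the
# "attention only" idea above; the single layer and all sizes are assumptions.
import torch
import torch.nn as nn

embed = nn.Linear(3, 128)                                   # embed each point independently
attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)

pts = torch.randn(2, 1024, 3)                               # (batch, points, xyz)
tokens = embed(pts)
out, _ = attn(tokens, tokens, tokens)                       # every point attends to every point
```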
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention because large-scale point cloud labelling remains a major constraint.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new attack methods, OPA and CTA, which go in the opposite direction to most existing attack approaches.
We show that the popular point cloud networks can be deceived with almost 100% success rate by shifting only one point from the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
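A hedged sketch of a single-point attack in that spirit follows: the coordinates of one chosen point (e.g. the most salient one according to an explanation method) are optimised to suppress the true-class logit until the prediction flips. The point selection, loss and optimiser settings are assumptions for illustration, not OPA or CTA.

```python
# Hedged sketch of a single-point attack: move one chosen point to suppress the
# true-class logit until the prediction flips. Point selection, loss and
# optimiser settings are illustrative assumptions, not OPA or CTA.
import torch


def one_point_attack(model, pts, true_class, point_idx, steps=200, lr=0.05):
    """pts: (1, N, 3); point_idx: the single point allowed to move (e.g. the most
    salient point according to an explanation method)."""
    mask = torch.zeros_like(pts)
    mask[0, point_idx] = 1.0                      # only this point receives the shift
    delta = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = pts + mask * delta            # shift applied to one point only
        logits = model(perturbed)
        if logits[0].argmax().item() != true_class:
            break                                 # prediction flipped: attack succeeded
        opt.zero_grad()
        logits[0, true_class].backward()          # gradient descent on the true-class logit
        opt.step()
    return perturbed.detach()
```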
arXiv Detail & Related papers (2021-10-08T14:29:02Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and point cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- Airborne LiDAR Point Cloud Classification with Graph Attention Convolution Neural Network [5.69168146446103]
We present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR.
Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds.
Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfactory overall accuracy (83.2%).
arXiv Detail & Related papers (2020-04-20T05:12:31Z)