Visualizing Global Explanations of Point Cloud DNNs
- URL: http://arxiv.org/abs/2203.09505v1
- Date: Thu, 17 Mar 2022 17:53:11 GMT
- Title: Visualizing Global Explanations of Point Cloud DNNs
- Authors: Hanxiao Tan
- Abstract summary: We propose a point cloud-applicable explainability approach based on a local surrogate model to show which components contribute to the classification.
Our new explainability approach provides fairly accurate, more semantically coherent, and widely applicable explanations for point cloud classification tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the fields of autonomous driving and robotics, point clouds are the
raw output of most mainstream 3D sensors and offer excellent real-time
performance. Point cloud neural networks have therefore become a popular
research direction in recent years. So far, however, there has been little
discussion about the explainability of deep neural networks for point clouds.
In this paper, we propose a point cloud-applicable explainability approach
based on a local surrogate model that shows which components of an input
contribute to the classification. Moreover, we propose quantitative fidelity
validations for the generated explanations, which strengthen their persuasive
power, and we compare the plausibility of different existing point
cloud-applicable explainability methods. Our new explainability approach
provides fairly accurate, more semantically coherent, and widely applicable
explanations for point cloud classification tasks. Our code is available at
https://github.com/Explain3D/LIME-3D
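The abstract describes a LIME-style local surrogate explanation adapted to point clouds. The sketch below illustrates the general recipe only, not the authors' LIME-3D code: the cloud is segmented into interpretable super-regions, perturbed copies are produced by switching regions off, the black-box classifier is queried, and a weighted linear surrogate yields per-region contributions. The farthest-point-sampling segmentation, the "collapse dropped points onto the centroid" perturbation, and the exponential proximity kernel are all assumptions made for illustration.

```python
"""Minimal LIME-style surrogate explanation for a point cloud classifier.

Illustrative sketch only; segmentation, perturbation, and kernel choices
are assumptions, not the LIME-3D implementation.
"""
import numpy as np
from sklearn.linear_model import Ridge


def farthest_point_sampling(points, k, seed=0):
    # Pick k well-spread anchor points that will define the super-regions.
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(points)))]
    dist = np.full(len(points), np.inf)
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[-1]], axis=1))
        idx.append(int(dist.argmax()))
    return np.array(idx)


def segment_point_cloud(points, n_regions=16, seed=0):
    # Assign every point to its nearest anchor -> interpretable "super-regions".
    anchors = points[farthest_point_sampling(points, n_regions, seed)]
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)
    return d.argmin(axis=1)  # (N,) region id per point


def explain_point_cloud(points, classify_fn, target_class,
                        n_regions=16, n_samples=500, kernel_width=0.25, seed=0):
    """Return (region ids, one importance weight per region) for target_class.

    classify_fn: any callable mapping an (N, 3) array to class probabilities
    (e.g. a wrapped point cloud DNN). Dropped regions are collapsed onto the
    cloud centroid so the input size stays fixed (an assumed convention).
    """
    rng = np.random.default_rng(seed)
    regions = segment_point_cloud(points, n_regions, seed)
    centroid = points.mean(axis=0)

    masks = rng.integers(0, 2, size=(n_samples, n_regions))  # binary on/off
    masks[0] = 1  # keep the unperturbed cloud as the first sample

    preds = np.empty(n_samples)
    for i, mask in enumerate(masks):
        perturbed = points.copy()
        perturbed[mask[regions] == 0] = centroid  # switch off masked regions
        preds[i] = classify_fn(perturbed)[target_class]

    # Weight samples by proximity to the original (fraction of regions kept).
    similarity = masks.mean(axis=1)
    weights = np.exp(-((1.0 - similarity) ** 2) / kernel_width ** 2)

    # Weighted linear surrogate: coefficients are per-region contributions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return regions, surrogate.coef_
```

In this setting, a simple quantitative fidelity check (in the spirit of the validations the abstract mentions, though not necessarily the metric the paper uses) would compare the surrogate's predictions on held-out masks against the black-box outputs; the per-region coefficients can then be mapped back to points via the region ids for visualization.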
Related papers
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for k-nearest-neighbour (kNN) operations.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraints of large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Explainability-Aware One Point Attack for Point Cloud Neural Networks [0.0]
This work proposes two new explainability-aware attack methods, OPA and CTA, which go in the opposite direction: explainability is used to guide the adversarial attack.
We show that popular point cloud networks can be deceived with an almost 100% success rate by shifting only one point of the input instance.
We also show the interesting impact of different point attribution distributions on the adversarial robustness of point cloud networks.
arXiv Detail & Related papers (2021-10-08T14:29:02Z)
- PnP-3D: A Plug-and-Play for 3D Point Clouds [38.05362492645094]
We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
arXiv Detail & Related papers (2021-08-16T23:59:43Z)
- Surrogate Model-Based Explainability Methods for Point Cloud NNs [0.0]
We propose new explainability approaches for point cloud deep neural networks.
Our approach provides a fairly accurate, more intuitive and widely applicable explanation for point cloud classification tasks.
arXiv Detail & Related papers (2021-07-28T16:13:20Z)
- Graphite: GRAPH-Induced feaTure Extraction for Point Cloud Registration [80.69255347486693]
We introduce a GRAPH-Induced feaTure Extraction pipeline, a simple yet powerful feature and keypoint detector.
We construct a generic graph-based learning scheme to describe point cloud regions and extract salient points.
We reformulate the 3D keypoint pipeline with graph neural networks, which allow efficient processing of the point set.
arXiv Detail & Related papers (2020-10-18T19:41:09Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
- Airborne LiDAR Point Cloud Classification with Graph Attention Convolution Neural Network [5.69168146446103]
We present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR.
Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds.
Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfactory overall accuracy (83.2%).
arXiv Detail & Related papers (2020-04-20T05:12:31Z)