TreeGCN-ED: Encoding Point Cloud using a Tree-Structured Graph Network
- URL: http://arxiv.org/abs/2110.03170v2
- Date: Mon, 11 Oct 2021 07:40:33 GMT
- Title: TreeGCN-ED: Encoding Point Cloud using a Tree-Structured Graph Network
- Authors: Prajwal Singh, Kaustubh Sadekar, Shanmuganathan Raman
- Abstract summary: This work proposes an autoencoder-based framework to generate robust embeddings for point clouds.
We demonstrate the applicability of the proposed framework in applications such as 3D point cloud completion and single-image-based 3D reconstruction.
- Score: 24.299931323012757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point cloud is an efficient way of representing and storing 3D geometric
data. Deep learning algorithms on point clouds are time and memory efficient.
Several methods such as PointNet and FoldingNet have been proposed for
processing point clouds. This work proposes an autoencoder-based framework to
generate robust embeddings for point clouds by utilizing hierarchical
information through graph convolution. We perform multiple experiments to assess
the quality of the embeddings generated by the proposed encoder architecture and
visualize the t-SNE map to highlight its ability to distinguish between
different object classes. We further demonstrate the applicability of the
proposed framework in applications such as 3D point cloud completion and
single-image-based 3D reconstruction.
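The hierarchical idea behind tree-structured graph convolutions can be illustrated with a toy branching step: a single latent embedding is repeatedly expanded so that each parent node spawns several child nodes, until enough nodes exist to project to 3D points. The sketch below is a minimal illustration of that growth pattern only; the layer names, weight shapes, and activation are assumptions, not the paper's exact formulation.

```python
import numpy as np

def tree_branch(features, W_branch, W_loop, degree):
    """One illustrative tree-branching step.

    Each parent node spawns `degree` children, and every child feature
    is then mixed by a shared per-node transform, loosely mirroring the
    branching used in tree-structured graph convolutions. All shapes and
    names here are illustrative assumptions.
    """
    n, d = features.shape
    # Branching: map each parent feature to `degree` child features.
    children = features @ W_branch              # (n, degree * d)
    children = children.reshape(n * degree, d)  # (n * degree, d)
    # Shared per-node transform with a ReLU stand-in nonlinearity.
    return np.maximum(children @ W_loop, 0.0)

rng = np.random.default_rng(0)
d = 16
nodes = rng.normal(size=(1, d))        # latent embedding: one root node
for degree in (2, 4, 8):               # tree grows 1 -> 2 -> 8 -> 64 nodes
    W_branch = rng.normal(size=(d, degree * d)) * 0.1
    W_loop = rng.normal(size=(d, d)) * 0.1
    nodes = tree_branch(nodes, W_branch, W_loop, degree)

points = nodes[:, :3]                  # project node features to xyz
print(points.shape)                    # (64, 3)
```

In an actual decoder the weights would be learned end-to-end and the final layer would output point coordinates directly; the point of the sketch is only how a tree of branching factors turns one embedding into a full point set.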
Related papers
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations using differentiable neural encoders.
The learned point cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks such as 3D detection and segmentation, but also low-level tasks such as 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Explaining Deep Neural Networks for Point Clouds using Gradient-based Visualisations [1.2891210250935146]
We propose a novel approach to generate coarse visual explanations of networks designed to classify unstructured 3D data.
Our method uses gradients flowing back to the final feature map layers and maps these values as contributions of the corresponding points in the input point cloud.
The generality of our approach is tested on various point cloud classification networks, including 'single object' networks PointNet, PointNet++, and DGCNN, and a 'scene' network, VoteNet.
arXiv Detail & Related papers (2022-07-26T15:42:08Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN search.
The proposed framework, namely PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding [80.04281842702294]
We introduce the concept of the multi-view point cloud (Voint cloud) representing each 3D point as a set of features extracted from several view-points.
This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation.
We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space.
arXiv Detail & Related papers (2021-11-30T13:08:19Z)
- PnP-3D: A Plug-and-Play for 3D Point Clouds [38.05362492645094]
We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
arXiv Detail & Related papers (2021-08-16T23:59:43Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- Multi-scale Receptive Fields Graph Attention Network for Point Cloud Classification [35.88116404702807]
The proposed MRFGAT architecture is tested on ModelNet10 and ModelNet40 datasets.
Results show it achieves state-of-the-art performance in shape classification tasks.
arXiv Detail & Related papers (2020-09-28T13:01:28Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure for evaluating consistency across the various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
- Learning to Segment 3D Point Clouds in 2D Image Space [20.119802932358333]
We show how to efficiently project 3D point clouds into a 2D image space.
Traditional 2D convolutional neural networks (CNNs) such as U-Net can then be applied for segmentation.
arXiv Detail & Related papers (2020-03-12T03:18:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.