PIG-Net: Inception based Deep Learning Architecture for 3D Point Cloud
Segmentation
- URL: http://arxiv.org/abs/2101.11987v1
- Date: Thu, 28 Jan 2021 13:27:55 GMT
- Title: PIG-Net: Inception based Deep Learning Architecture for 3D Point Cloud
Segmentation
- Authors: Sindhu Hegde and Shankar Gangisetty
- Abstract summary: We propose an inception based deep network architecture called PIG-Net, which effectively characterizes the local and global geometric details of the point clouds.
We perform an exhaustive experimental analysis of the PIG-Net architecture on two state-of-the-art datasets.
- Score: 0.9137554315375922
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Point clouds, being the simple and compact representation of surface geometry
of 3D objects, have gained increasing popularity with the evolution of deep
learning networks for classification and segmentation tasks. Unlike humans,
machines find it challenging to analyze the segments of an object, a task that
is essential in various machine vision applications. In this paper, we
address the problem of segmentation and labelling of 3D point clouds by
proposing an inception based deep network architecture called PIG-Net, which
effectively characterizes the local and global geometric details of the point
clouds. In PIG-Net, the local features are extracted from the transformed input
points using the proposed inception layers and then aligned by feature
transform. These local features are aggregated using the global average pooling
layer to obtain the global features. Finally, the concatenated local and
global features are fed to the convolution layers for segmenting the 3D point clouds.
We perform an exhaustive experimental analysis of the PIG-Net architecture on
two state-of-the-art datasets, namely, ShapeNet [1] and PartNet [2]. We
evaluate the effectiveness of our network by performing an ablation study.
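The feature flow described in the abstract (per-point local features, global average pooling, concatenation of local and global features, then shared per-point convolutions for segmentation) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed dimensions, not the authors' implementation: the inception layers and feature transform are abstracted into a single shared linear map, and all sizes (1024 points, 64 feature channels, 4 part labels) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: points, input coords, local feature width, part labels.
N, C_IN, C_LOCAL, C_SEG = 1024, 3, 64, 4

# Per-point local feature extraction: a stand-in for the inception layers
# and feature transform (one shared linear map applied to every point).
W_local = rng.standard_normal((C_IN, C_LOCAL)) * 0.1
points = rng.standard_normal((N, C_IN))            # one point cloud
local_feats = np.maximum(points @ W_local, 0.0)    # (N, C_LOCAL), ReLU

# Global average pooling over points yields a single global descriptor.
global_feat = local_feats.mean(axis=0)             # (C_LOCAL,)

# Concatenate the global descriptor onto every point's local features.
fused = np.concatenate(
    [local_feats, np.broadcast_to(global_feat, (N, C_LOCAL))], axis=1
)                                                  # (N, 2 * C_LOCAL)

# A 1x1 convolution over points is just another shared linear map,
# producing per-point part scores for segmentation.
W_seg = rng.standard_normal((2 * C_LOCAL, C_SEG)) * 0.1
scores = fused @ W_seg                             # (N, C_SEG)
labels = scores.argmax(axis=1)                     # predicted part per point

print(fused.shape, scores.shape, labels.shape)
```

The key design point this mirrors is that concatenating a pooled global descriptor back onto each point lets the per-point classifier see both local geometry and overall shape context.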
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract priors from well-trained transformers on massive images.
Experiments on the PointDA-10 and Sim-to-Real datasets verify that the proposed method consistently achieves state-of-the-art performance for unsupervised domain adaptation (UDA) in point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z) - PointeNet: A Lightweight Framework for Effective and Efficient Point
Cloud Analysis [28.54939134635978]
PointeNet is a network designed specifically for point cloud analysis.
Our method demonstrates flexibility by seamlessly integrating with a classification/segmentation head or embedding into off-the-shelf 3D object detection networks.
Experiments on object-level datasets, including ModelNet40, ScanObjectNN, and ShapeNet, and the scene-level dataset KITTI, demonstrate the superior performance of PointeNet over state-of-the-art methods in point cloud analysis.
arXiv Detail & Related papers (2023-12-20T03:34:48Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud
Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - PointResNet: Residual Network for 3D Point Cloud Segmentation and
Classification [18.466814193413487]
Point cloud segmentation and classification are some of the primary tasks in 3D computer vision.
In this paper, we propose PointResNet, a residual block-based approach.
Our model directly processes the 3D points, using a deep neural network for the segmentation and classification tasks.
arXiv Detail & Related papers (2022-11-20T17:39:48Z) - Learning point embedding for 3D data processing [2.12121796606941]
Current point-based methods are essentially spatial relationship processing networks.
Our architecture, PE-Net, learns the representation of point clouds in high-dimensional space.
Experiments show that PE-Net achieves state-of-the-art performance on multiple challenging datasets.
arXiv Detail & Related papers (2021-07-19T00:25:28Z) - FatNet: A Feature-attentive Network for 3D Point Cloud Processing [1.502579291513768]
We introduce a novel feature-attentive neural network layer, a FAT layer, that combines both global point-based features and local edge-based features in order to generate better embeddings.
Our architecture achieves state-of-the-art results on the task of point cloud classification, as demonstrated on the ModelNet40 dataset.
arXiv Detail & Related papers (2021-04-07T23:13:56Z) - Learning Geometry-Disentangled Representation for Complementary
Understanding of 3D Object Point Cloud [50.56461318879761]
We propose Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing.
GDANet disentangles point clouds into the contour and flat parts of 3D objects, respectively denoted by sharp and gentle variation components.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object
Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z) - GFPNet: A Deep Network for Learning Shape Completion in Generic Fitted
Primitives [68.8204255655161]
We propose an object reconstruction apparatus that uses the so-called Generic Primitives (GP) to complete shapes.
We show that GFPNet competes with state-of-the-art shape completion methods, providing performance results on the ModelNet and KITTI benchmarking datasets.
arXiv Detail & Related papers (2020-06-03T08:29:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.