GPr-Net: Geometric Prototypical Network for Point Cloud Few-Shot Learning
- URL: http://arxiv.org/abs/2304.06007v1
- Date: Wed, 12 Apr 2023 17:32:18 GMT
- Title: GPr-Net: Geometric Prototypical Network for Point Cloud Few-Shot Learning
- Authors: Tejas Anvekar, Dena Bazazian
- Abstract summary: GPr-Net is a lightweight and computationally efficient geometric network that captures the prototypical topology of point clouds.
We show that GPr-Net outperforms state-of-the-art methods in few-shot learning on point clouds.
- Score: 2.4366811507669115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of 3D computer vision applications, point cloud few-shot
learning plays a critical role. However, it poses an arduous challenge due to
the sparsity, irregularity, and unordered nature of the data. Current methods
rely on complex local geometric extraction techniques such as convolution,
graph, and attention mechanisms, along with extensive data-driven pre-training
tasks. These approaches contradict the fundamental goal of few-shot learning,
which is to facilitate efficient learning. To address this issue, we propose
GPr-Net (Geometric Prototypical Network), a lightweight and computationally
efficient geometric prototypical network that captures the intrinsic topology
of point clouds and achieves superior performance. Our proposed method, IGI++
(Intrinsic Geometry Interpreter++) employs vector-based hand-crafted intrinsic
geometry interpreters and Laplace vectors to extract and evaluate point cloud
morphology, resulting in improved representations for FSL (Few-Shot Learning).
Additionally, Laplace vectors enable the extraction of valuable features from
point clouds with fewer points. To tackle the distribution drift challenge in
few-shot metric learning, we leverage hyperbolic space and demonstrate that our
approach handles intra- and inter-class variance better than existing point
cloud few-shot learning methods. Experimental results on the ModelNet40 dataset
show that GPr-Net outperforms state-of-the-art methods in few-shot learning on
point clouds while being up to $170\times$ more computationally efficient than
all existing works. The code is publicly available at
https://github.com/TejasAnvekar/GPr-Net.
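The two core ideas of the abstract, Laplace-style geometric features and nearest-prototype classification under a hyperbolic metric, can be illustrated with a minimal NumPy sketch. This is a simplified assumption-laden illustration, not the paper's IGI++ implementation: `laplace_vectors` uses a plain k-NN umbrella operator, and `poincare_distance` is the standard Poincaré-ball distance commonly used in hyperbolic metric learning.

```python
import numpy as np

def laplace_vectors(points, k=8):
    """Per-point Laplace (umbrella) vector: each point minus the centroid
    of its k nearest neighbors. A simplified stand-in for the paper's
    Laplace features, not the exact IGI++ operator."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn = np.argsort(dists, axis=1)[:, 1:k + 1]  # drop index 0 (the point itself)
    return points - points[nn].mean(axis=1)     # shape (N, 3)

def poincare_distance(x, y, eps=1e-7):
    """Distance between two points inside the Poincare ball (norm < 1),
    a common choice for hyperbolic prototypical networks."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

def nearest_prototype(query, prototypes):
    """Few-shot classification: index of the hyperbolically nearest prototype."""
    return int(np.argmin([poincare_distance(query, p) for p in prototypes]))
```

In an actual few-shot episode, each prototype would be an aggregate of the support-set embeddings of one class; the sketch only shows the nearest-prototype decision rule.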
Related papers
- PointeNet: A Lightweight Framework for Effective and Efficient Point Cloud Analysis [28.54939134635978]
PointeNet is a network designed specifically for point cloud analysis.
Our method demonstrates flexibility by seamlessly integrating with a classification/segmentation head or embedding into off-the-shelf 3D object detection networks.
Experiments on object-level datasets, including ModelNet40, ScanObjectNN, and ShapeNetPart, and on the scene-level dataset KITTI, demonstrate the superior performance of PointeNet over state-of-the-art methods in point cloud analysis.
arXiv Detail & Related papers (2023-12-20T03:34:48Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on widely used point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Dynamic Clustering Transformer Network for Point Cloud Segmentation [23.149220817575195]
We propose a novel 3D point cloud representation network called the Dynamic Clustering Transformer Network (DCTNet).
It has an encoder-decoder architecture, allowing for both local and global feature learning.
Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset.
arXiv Detail & Related papers (2023-05-30T01:11:05Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- UPDesc: Unsupervised Point Descriptor Learning for Robust Registration [54.95201961399334]
UPDesc is an unsupervised method to learn point descriptors for robust point cloud registration.
We show that our learned descriptors yield superior performance over existing unsupervised methods.
arXiv Detail & Related papers (2021-08-05T17:11:08Z)
- Point Cloud Registration using Representative Overlapping Points [10.843159482657303]
We propose ROPNet, a new deep learning model using Representative Overlapping Points with discriminative features for registration.
Specifically, we propose a context-guided module which uses an encoder to extract global features for predicting point overlap score.
Experiments over ModelNet40 using noisy and partially overlapping point clouds show that the proposed method outperforms traditional and learning-based methods.
arXiv Detail & Related papers (2021-07-06T12:52:22Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
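RandLA-Net's efficiency claim rests on replacing expensive point selection with uniform random sampling. A rough sketch of the two strategies (illustrative only, not the authors' code) contrasts the O(N) random pick with O(N·m) farthest point sampling:

```python
import numpy as np

def random_sample(points, m, seed=0):
    """O(N) uniform random sampling -- the cheap strategy RandLA-Net adopts."""
    rng = np.random.default_rng(seed)
    return points[rng.choice(len(points), size=m, replace=False)]

def farthest_point_sample(points, m):
    """O(N*m) farthest point sampling -- the costlier alternative it replaces."""
    chosen = [0]
    min_d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(min_d))  # point farthest from all chosen so far
        chosen.append(nxt)
        min_d = np.minimum(min_d, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]
```

The per-sample work in `random_sample` is constant, while each FPS iteration scans all N points, which is what makes random sampling attractive at the million-point scale the summary mentions.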
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
- PointShuffleNet: Learning Non-Euclidean Features with Homotopy Equivalence and Mutual Information [9.920649045126188]
We propose a novel point cloud analysis neural network called PointShuffleNet (PSN), which shows great promise in point cloud classification and segmentation.
Our PSN achieves state-of-the-art results on ModelNet40, ShapeNet and S3DIS with high efficiency.
arXiv Detail & Related papers (2021-03-31T03:01:16Z)
- Learning Rotation-Invariant Representations of Point Clouds Using Aligned Edge Convolutional Neural Networks [29.3830445533532]
Point cloud analysis is an area of increasing interest due to the development of 3D sensors that are able to rapidly measure the depth of scenes accurately.
Applying deep learning techniques to perform point cloud analysis is non-trivial due to the inability of these methods to generalize to unseen rotations.
To address this limitation, one usually has to augment the training data, which can lead to extra computation and require larger model complexity.
This paper proposes a new neural network, the Aligned Edge Convolutional Neural Network (AECNN), that learns a feature representation of point clouds relative to Local Reference Frames (LRFs).
arXiv Detail & Related papers (2021-01-02T17:36:00Z)
- Local Grid Rendering Networks for 3D Object Detection in Point Clouds [98.02655863113154]
CNNs are powerful, but directly applying convolutions after voxelizing entire point clouds into a dense regular 3D grid is computationally costly.
We propose a novel and principled Local Grid Rendering (LGR) operation to render the small neighborhood of a subset of input points into a low-resolution 3D grid independently.
We validate LGR-Net for 3D object detection on the challenging ScanNet and SUN RGB-D datasets.
arXiv Detail & Related papers (2020-07-04T13:57:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.