Compositional Prototype Network with Multi-view Comparision for Few-Shot
Point Cloud Semantic Segmentation
- URL: http://arxiv.org/abs/2012.14255v1
- Date: Mon, 28 Dec 2020 15:01:34 GMT
- Title: Compositional Prototype Network with Multi-view Comparision for Few-Shot
Point Cloud Semantic Segmentation
- Authors: Xiaoyu Chen, Chi Zhang, Guosheng Lin, Jing Han
- Abstract summary: A fully supervised point cloud segmentation network often requires a large amount of data with point-wise annotations.
We present the Compositional Prototype Network that can undertake point cloud segmentation with only a few labeled training samples.
Inspired by the few-shot learning literature in images, our network directly transfers label information from the limited training data to unlabeled test data for prediction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud segmentation is a fundamental visual understanding task in 3D
vision. A fully supervised point cloud segmentation network often requires a
large amount of data with point-wise annotations, which is expensive to obtain.
In this work, we present the Compositional Prototype Network that can undertake
point cloud segmentation with only a few labeled training samples. Inspired by the
few-shot learning literature in images, our network directly transfers label
information from the limited training data to unlabeled test data for
prediction. The network decomposes the representations of complex point cloud
data into a set of local regional representations and utilizes them to
calculate the compositional prototypes of a visual concept. Our network
includes a key Multi-View Comparison Component that exploits the redundant
views of the support set. To evaluate the proposed method, we create a new
segmentation benchmark dataset, ScanNet-$6^i$, which is built upon ScanNet
dataset. Extensive experiments show that our method outperforms baselines by a
significant margin. Moreover, when we use our network to handle the
long-tail problem in a fully supervised point cloud segmentation dataset, it
can also effectively boost the performance of the few-shot classes.
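The pipeline the abstract describes (class prototypes pooled from labeled support points, aggregated across redundant support views, then matched against query points) can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the masked average pooling, the simple mean over views, and cosine matching are illustrative choices.

```python
import numpy as np

def masked_average_prototype(features, mask):
    """Compute a class prototype by averaging the embeddings of the
    support points annotated with that class (masked average pooling).

    features: (N, D) per-point embeddings; mask: (N,) binary class mask.
    """
    return features[mask.astype(bool)].mean(axis=0)

def multi_view_prototypes(view_features, view_masks):
    """Aggregate prototypes computed independently from several support
    views by averaging them, exploiting the redundancy between views.
    (Assumed aggregation; the paper's comparison component may differ.)"""
    per_view = [masked_average_prototype(f, m)
                for f, m in zip(view_features, view_masks)]
    return np.mean(per_view, axis=0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_query(query_features, prototypes):
    """Assign each query point the label of its most similar prototype."""
    sims = np.array([[cosine_similarity(q, p) for p in prototypes]
                     for q in query_features])
    return sims.argmax(axis=1)

# Toy 2-way / 1-shot episode with two support views and D = 4 embeddings.
rng = np.random.default_rng(0)
view_features = [rng.normal(size=(6, 4)) for _ in range(2)]
mask_fg = np.array([0, 0, 0, 1, 1, 1])  # foreground class points
mask_bg = 1 - mask_fg
prototypes = [
    multi_view_prototypes(view_features, [mask_bg, mask_bg]),
    multi_view_prototypes(view_features, [mask_fg, mask_fg]),
]
query = rng.normal(size=(5, 4))
labels = segment_query(query, prototypes)  # one label (0 or 1) per point
```

In a real model the per-point embeddings would come from a trained point cloud backbone, and the label transfer from support to query happens entirely through the prototypes, which is what makes the approach usable with only a handful of annotated samples.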
Related papers
- Collect-and-Distribute Transformer for 3D Point Cloud Analysis [82.03517861433849]
We propose a new transformer network equipped with a collect-and-distribute mechanism to communicate short- and long-range contexts of point clouds.
Results show the effectiveness of the proposed CDFormer, delivering several new state-of-the-art performances on point cloud classification and segmentation tasks.
arXiv Detail & Related papers (2023-06-02T03:48:45Z)
- Few-Shot 3D Point Cloud Semantic Segmentation via Stratified Class-Specific Attention Based Transformer Network [22.9434434107516]
We develop a new multi-layer transformer network for few-shot point cloud semantic segmentation.
Our method achieves new state-of-the-art performance with 15% less inference time than existing few-shot 3D point cloud segmentation models.
arXiv Detail & Related papers (2023-03-28T00:27:54Z)
- Point-Unet: A Context-aware Point-based Neural Network for Volumetric Segmentation [18.81644604997336]
We propose Point-Unet, a novel method that incorporates the efficiency of deep learning with 3D point clouds into volumetric segmentation.
Our key idea is to first predict the regions of interest in the volume by learning an attentional probability map.
A comprehensive benchmark on different metrics has shown that our context-aware Point-Unet robustly outperforms the SOTA voxel-based networks.
arXiv Detail & Related papers (2022-03-16T22:02:08Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- PnP-3D: A Plug-and-Play for 3D Point Clouds [38.05362492645094]
We propose a plug-and-play module, PnP-3D, to improve the effectiveness of existing networks in analyzing point cloud data.
To thoroughly evaluate our approach, we conduct experiments on three standard point cloud analysis tasks.
In addition to achieving state-of-the-art results, we present comprehensive studies to demonstrate our approach's advantages.
arXiv Detail & Related papers (2021-08-16T23:59:43Z)
- Learning point embedding for 3D data processing [2.12121796606941]
Current point-based methods are essentially spatial relationship processing networks.
Our architecture, PE-Net, learns the representation of point clouds in high-dimensional space.
Experiments show that PE-Net achieves state-of-the-art performance on multiple challenging datasets.
arXiv Detail & Related papers (2021-07-19T00:25:28Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- Multi-Path Region Mining For Weakly Supervised 3D Semantic Segmentation on Point Clouds [67.0904905172941]
We propose a weakly supervised approach to predict point-level results using weak labels on 3D point clouds.
To the best of our knowledge, this is the first method that uses cloud-level weak labels in raw 3D space to train a point cloud semantic segmentation network.
arXiv Detail & Related papers (2020-03-29T14:13:29Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated summaries) and is not responsible for any consequences arising from its use.