Unsupervised Point Cloud Pre-Training via Occlusion Completion
- URL: http://arxiv.org/abs/2010.01089v3
- Date: Wed, 13 Oct 2021 23:15:38 GMT
- Title: Unsupervised Point Cloud Pre-Training via Occlusion Completion
- Authors: Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matthew J. Kusner
- Abstract summary: We describe a simple pre-training approach for point clouds.
It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks.
- Score: 18.42664414305454
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We describe a simple pre-training approach for point clouds. It works in
three steps: 1. Mask all points occluded in a camera view; 2. Learn an
encoder-decoder model to reconstruct the occluded points; 3. Use the encoder
weights as initialisation for downstream point cloud tasks. We find that even
when we construct a single pre-training dataset (from ModelNet40), this
pre-training method improves accuracy across different datasets and encoders,
on a wide range of downstream tasks. Specifically, we show that our method
outperforms previous pre-training methods in object classification, and both
part-based and semantic segmentation tasks. We study the pre-trained features
and find that they lead to wide downstream minima, have high transformation
invariance, and have activations that are highly correlated with part labels.
Code and data are available at: https://github.com/hansen7/OcCo
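To make the three-step recipe concrete, below is a minimal sketch, not the authors' released implementation (see the repository above). It assumes a camera looking down the +z axis, approximates occlusion with a coarse z-buffer, uses a tiny PointNet-style encoder, and reconstructs the occluded points with a Chamfer loss; all layer sizes and the grid resolution are illustrative assumptions.
```python
# Minimal sketch of the three-step recipe described above (illustrative assumptions only).
import torch
import torch.nn as nn


def occlude(points, grid=32):
    """Step 1: keep only the point nearest to the camera in each (x, y) cell."""
    uv = points[:, :2]
    span = (uv.max(0).values - uv.min(0).values).clamp_min(1e-9)
    cell = ((uv - uv.min(0).values) / span * (grid - 1)).long()
    key = cell[:, 0] * grid + cell[:, 1]
    visible = torch.zeros(points.shape[0], dtype=torch.bool)
    for k in key.unique():
        idx = (key == k).nonzero(as_tuple=True)[0]
        visible[idx[points[idx, 2].argmin()]] = True   # smallest depth = closest to camera
    return points[visible], points[~visible]           # visible, occluded


class EncoderDecoder(nn.Module):
    """Step 2: reconstruct the occluded points from the visible ones."""
    def __init__(self, feat_dim=256, n_out=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                     nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, visible):                         # (N_visible, 3)
        feat = self.encoder(visible).max(dim=0).values  # global max-pooled feature
        return self.decoder(feat).view(self.n_out, 3)   # predicted occluded points


def chamfer(a, b):
    d = torch.cdist(a, b)
    return d.min(1).values.mean() + d.min(0).values.mean()


# Toy pre-training step on one random cloud.
points = torch.randn(2048, 3)
visible, occluded = occlude(points)
model = EncoderDecoder()
loss = chamfer(model(visible), occluded)
loss.backward()
# Step 3: after pre-training, model.encoder provides the initialisation
# for a downstream classification or segmentation network.
```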
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives peculiar to point clouds, namely centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: it first randomly masks out groups of points, which are then processed by a Transformer-based point cloud encoder.
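As a rough illustration, such per-group geometric targets could be computed with a PCA of each masked group; this is an assumption-laden sketch, not GeoMAE's released code, and the paper's exact target definitions may differ.
```python
# Sketch of per-group geometric targets (centroid, normal, curvature) via PCA.
# Illustrative assumption, not GeoMAE's reference implementation.
import torch

def geometric_targets(group):                  # group: (K, 3) points in one patch
    centroid = group.mean(dim=0)
    centered = group - centroid
    cov = centered.T @ centered / group.shape[0]
    eigvals, eigvecs = torch.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # direction of least variance
    curvature = eigvals[0] / eigvals.sum().clamp_min(1e-9)  # surface variation
    return centroid, normal, curvature

# During pre-training, the encoder would be asked to predict these targets for
# the masked groups; here we only compute them for one random group.
centroid, normal, curvature = geometric_targets(torch.randn(32, 3))
```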
arXiv Detail & Related papers (2023-05-15T17:14:55Z)
- FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint for the underexplored task of unsupervised, class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
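A hypothetical sketch of this pseudo-labelling idea (not FreePoint's actual pipeline): concatenate coordinates, colours, and deep features per point, then cluster them into coarse instance masks; DBSCAN and its parameters here are illustrative assumptions.
```python
# Hypothetical sketch of coarse pseudo-instance masks from combined point features.
# DBSCAN and its parameters are assumptions; FreePoint's actual procedure may differ.
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_instance_masks(xyz, rgb, deep_feat, eps=0.3, min_samples=10):
    feats = np.concatenate([xyz, rgb, deep_feat], axis=1)   # (N, 3 + 3 + D)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    return labels                                            # -1 marks unassigned points

# The resulting masks would then serve as pseudo labels for training an
# instance segmentation model, as described above.
masks = pseudo_instance_masks(np.random.rand(1000, 3),
                              np.random.rand(1000, 3),
                              np.random.rand(1000, 16))
```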
arXiv Detail & Related papers (2023-05-11T16:56:26Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even compared to the fully supervised counterpart trained with 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning [81.85951026033787]
In this work, we adopt transformers and incorporate them into a hierarchical framework for shape classification as well as part and scene segmentation.
We also compute efficient and dynamic global cross attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art mean accuracy on shape classification and yields results on par with previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z)
- Upsampling Autoencoder for Self-Supervised Point Cloud Learning [11.19408173558718]
We propose a self-supervised pretraining model for point cloud learning without human annotations.
The upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our UAE outperforms previous state-of-the-art methods in shape classification, part segmentation and point cloud upsampling tasks.
arXiv Detail & Related papers (2022-03-21T07:20:37Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Point Cloud Pre-training by Mixing and Disentangling [35.18101910728478]
Mixing and Disentangling (MD) is a self-supervised learning approach for point cloud pre-training.
We show that an encoder pre-trained with our MD approach significantly surpasses the same encoder trained from scratch and converges quickly.
We hope this self-supervised learning attempt on point clouds can pave the way for reducing deep models' dependence on large-scale labeled data.
arXiv Detail & Related papers (2021-09-01T15:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.