Upsampling Autoencoder for Self-Supervised Point Cloud Learning
- URL: http://arxiv.org/abs/2203.10768v1
- Date: Mon, 21 Mar 2022 07:20:37 GMT
- Title: Upsampling Autoencoder for Self-Supervised Point Cloud Learning
- Authors: Cheng Zhang, Jian Shi, Xuan Deng, Zizhao Wu
- Abstract summary: We propose a self-supervised pretraining model for point cloud learning without human annotations.
Upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our UAE outperforms previous state-of-the-art methods in shape classification, part segmentation and point cloud upsampling tasks.
- Score: 11.19408173558718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the computer-aided design (CAD) community, point cloud data is
pervasively used in reverse engineering, where point cloud analysis plays an
important role. While a large number of supervised learning methods have been
proposed to handle unordered point clouds and have demonstrated remarkable
success, their performance and applicability are limited by costly data
annotation. In this work, we propose a novel self-supervised pretraining model
for point cloud learning without human annotations, which relies solely on an
upsampling operation to perform feature learning on point clouds in an effective
manner. The key premise of our approach is that the upsampling operation encourages
the network to capture both high-level semantic information and low-level
geometric information of the point cloud, so downstream tasks such as
classification and segmentation benefit from the pre-trained model.
Specifically, our method first randomly subsamples the input point cloud at a
low proportion (e.g., 12.5%). The subsampled points are then fed into an
encoder-decoder architecture, where the encoder operates only on the
subsampled points and an upsampling decoder reconstructs the original point
cloud from the learned features. Finally, we design a novel joint loss function
that enforces the upsampled points to be similar to the original point cloud
and uniformly distributed on the underlying shape surface. By adopting the
pre-trained encoder weights as initialisation of models for downstream tasks,
we find that our UAE outperforms previous state-of-the-art methods in shape
classification, part segmentation and point cloud upsampling tasks. Code will
be made publicly available upon acceptance.
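As a concrete illustration, the subsampling step and the two terms of the joint loss described in the abstract can be sketched in NumPy. This is only an assumed sketch: the uniformity term below is a simple k-nearest-neighbor-variance stand-in, since the paper's exact loss formulation is not given in the abstract, and the encoder-decoder itself is omitted.

```python
import numpy as np

def random_subsample(points, ratio=0.125, seed=0):
    """Randomly keep a fraction of the points (the paper uses e.g. 12.5%)."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    idx = rng.choice(n, size=max(1, int(n * ratio)), replace=False)
    return points[idx]

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    a common choice for the 'similar to the original' reconstruction term."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def uniformity_penalty(points, k=4):
    """Variance of k-NN distances; low variance means the points are evenly
    spread over the surface. A hypothetical stand-in for the paper's
    uniformity term, whose exact form the abstract does not state."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    knn = np.sort(d, axis=1)[:, :k]      # k nearest neighbors per point
    return knn.var()

def joint_loss(upsampled, original, alpha=1.0):
    """Reconstruction term plus weighted uniformity term."""
    return chamfer_distance(upsampled, original) + alpha * uniformity_penalty(upsampled)
```

In the full method, the decoder's upsampled output would be passed through `joint_loss` against the original cloud during pretraining; only the encoder weights are kept for downstream tasks.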
Related papers
- Point Cloud Novelty Detection Based on Latent Representations of a General Feature Extractor [9.11903730548763]
We propose an effective unsupervised 3D point cloud novelty detection approach, leveraging a general point cloud feature extractor and a one-class classifier.
Compared to existing methods measuring the reconstruction error in 3D coordinate space, our approach utilizes latent representations where the shape information is condensed.
We confirm that our general feature extractor can extract shape features of unseen categories, eliminating the need for autoencoder re-training and reducing the computational burden.
arXiv Detail & Related papers (2024-10-13T14:42:43Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - GeoMAE: Masked Geometric Target Prediction for Self-supervised Point
Cloud Pre-Training [16.825524577372473]
We introduce a point cloud representation learning framework, based on geometric feature reconstruction.
We identify three self-supervised learning objectives peculiar to point clouds: centroid prediction, normal estimation, and curvature prediction.
Our pipeline is conceptually simple and consists of two major steps: first, it randomly masks out groups of points; then a Transformer-based point cloud encoder processes the remaining points.
arXiv Detail & Related papers (2023-05-15T17:14:55Z) - PointCaM: Cut-and-Mix for Open-Set Point Cloud Learning [72.07350827773442]
We propose to solve open-set point cloud learning using a novel Point Cut-and-Mix mechanism.
We use the Unknown-Point Simulator to simulate out-of-distribution data in the training stage.
The Unknown-Point Estimator module learns to exploit the point cloud's feature context for discriminating the known and unknown data.
arXiv Detail & Related papers (2022-12-05T03:53:51Z) - SoftPool++: An Encoder-Decoder Network for Point Cloud Completion [93.54286830844134]
We propose a novel convolutional operator for the task of point cloud completion.
The proposed operator does not require any max-pooling or voxelization operation.
We show that our approach achieves state-of-the-art performance in shape completion at low and high resolutions.
arXiv Detail & Related papers (2022-05-08T15:31:36Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - Point Cloud Pre-training by Mixing and Disentangling [35.18101910728478]
Mixing and Disentangling (MD) is a self-supervised learning approach for point cloud pre-training.
We show that the encoder + ours (MD) significantly surpasses that of the encoder trained from scratch and converges quickly.
We hope this self-supervised learning attempt on point clouds can pave the way for reducing the deeply-learned model dependence on large-scale labeled data.
arXiv Detail & Related papers (2021-09-01T15:52:18Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z) - Unsupervised Point Cloud Pre-Training via Occlusion Completion [18.42664414305454]
We describe a simple pre-training approach for point clouds.
It works in three steps: 1. Mask all points occluded in a camera view; 2. Learn an encoder-decoder model to reconstruct the occluded points; 3. Use the encoder weights as initialisation for downstream point cloud tasks.
arXiv Detail & Related papers (2020-10-02T16:43:14Z)
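The three-step occlusion-completion recipe above hinges on step 1, masking points hidden from a camera view. A crude sketch of that visibility test is shown below; the grid-based z-buffer is my assumption as a stand-in for the paper's actual view-based occlusion culling, and the reconstruction model of steps 2-3 is omitted.

```python
import numpy as np

def occlusion_mask(points, grid=8):
    """Crude visibility test: project points onto the XY plane (camera
    looking down -z), bin them into a grid, and keep only the point
    closest to the camera (smallest z) in each cell. All other points
    are treated as occluded and would be reconstructed in step 2."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * grid).astype(int)
    keys = cells[:, 0] * grid + cells[:, 1]   # flatten 2D cell index
    visible = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        visible[idx[np.argmin(points[idx, 2])]] = True  # nearest to camera
    return visible
```

The visible subset would feed the encoder-decoder in step 2, with the occluded points as reconstruction targets; step 3 then reuses the trained encoder weights to initialise downstream models.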
This list is automatically generated from the titles and abstracts of the papers in this site.