Representing Point Clouds with Generative Conditional Invertible Flow Networks
- URL: http://arxiv.org/abs/2010.11087v1
- Date: Wed, 7 Oct 2020 18:30:47 GMT
- Title: Representing Point Clouds with Generative Conditional Invertible Flow Networks
- Authors: Michał Stypułkowski, Kacper Kania, Maciej Zamorski, Maciej Zięba, Tomasz Trzciński, Jan Chorowski
- Abstract summary: We propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution.
We show that our method leverages generative invertible flow networks to learn embeddings as well as to generate point clouds.
Our model offers competitive or superior quantitative results on benchmark datasets.
- Score: 15.280751949071016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a simple yet effective method to represent point
clouds as sets of samples drawn from a cloud-specific probability distribution.
This interpretation matches the intrinsic characteristics of point clouds: the
number of points and their ordering within a cloud are not important, as all
points are drawn from the proximity of the object boundary. We propose to
represent each cloud as a parameterized probability distribution defined by a
generative neural network. Once trained, such a model provides a natural
framework for point cloud manipulation operations, such as aligning a new cloud
into a default spatial orientation. To exploit similarities between same-class
objects and to improve model performance, we turn to weight sharing: networks
that model densities of points belonging to objects in the same family share
all parameters with the exception of a small, object-specific embedding vector.
We show that these embedding vectors capture semantic relationships between
objects. Our method leverages generative invertible flow networks to learn
embeddings as well as to generate point clouds. Thanks to this formulation and
contrary to similar approaches, we are able to train our model in an end-to-end
fashion. As a result, our model offers competitive or superior quantitative
results on benchmark datasets, while enabling unprecedented capabilities to
perform cloud manipulation tasks, such as point cloud registration and
regeneration, by a generative network.
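The core idea of the abstract, a flow whose weights are shared across all objects of a family while a small per-object embedding vector conditions the transform, can be illustrated with a single affine coupling layer (the standard building block of invertible flow networks). This is a minimal NumPy sketch, not the authors' implementation; all names, dimensions, and the single-layer architecture are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, E, H = 3, 4, 8  # point dim, embedding dim, hidden width
# Shared weights: identical for every object in the family.
W1 = rng.normal(scale=0.1, size=(1 + E, H))   # consumes x[:, :1] and the embedding
W2s = rng.normal(scale=0.1, size=(H, D - 1))  # scale head
W2t = rng.normal(scale=0.1, size=(H, D - 1))  # shift head

def coupling_forward(x, e):
    """Split x into (x1, x2); transform x2 with a scale/shift predicted from (x1, e)."""
    x1, x2 = x[:, :1], x[:, 1:]
    cond = np.concatenate([x1, np.broadcast_to(e, (x.shape[0], E))], axis=1)
    h = np.tanh(cond @ W1)
    s, t = h @ W2s, h @ W2t
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=1)  # log|det J|, needed for the flow likelihood
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y, e):
    """Exact inverse: recompute s, t from the untouched half and undo the transform."""
    y1, y2 = y[:, :1], y[:, 1:]
    cond = np.concatenate([y1, np.broadcast_to(e, (y.shape[0], E))], axis=1)
    h = np.tanh(cond @ W1)
    s, t = h @ W2s, h @ W2t
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

# Sampling a cloud = drawing N Gaussian points and pushing them through the flow.
# Two objects share all weights and differ only in their embedding vector.
e_chair, e_table = rng.normal(size=E), rng.normal(size=E)
z = rng.normal(size=(5, D))
cloud, _ = coupling_forward(z, e_chair)
z_back = coupling_inverse(cloud, e_chair)
```

Because every point is transformed independently, the model is permutation-invariant and can emit any number of points, matching the set-of-samples interpretation above; invertibility gives exact likelihoods for end-to-end training.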
Related papers
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- GP-PCS: One-shot Feature-Preserving Point Cloud Simplification with Gaussian Processes on Riemannian Manifolds [2.8811433060309763]
We propose a novel, one-shot point cloud simplification method.
It preserves both the salient structural features and the overall shape of a point cloud without any prior surface reconstruction step.
We evaluate our method on several benchmark and self-acquired point clouds, compare it to a range of existing methods, and demonstrate its application in the downstream tasks of registration and surface reconstruction.
arXiv Detail & Related papers (2023-03-27T14:05:34Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Upsampling Autoencoder for Self-Supervised Point Cloud Learning [11.19408173558718]
We propose a self-supervised pretraining model for point cloud learning without human annotations.
The upsampling operation encourages the network to capture both high-level semantic information and low-level geometric information of the point cloud.
We find that our UAE outperforms previous state-of-the-art methods in shape classification, part segmentation and point cloud upsampling tasks.
arXiv Detail & Related papers (2022-03-21T07:20:37Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- Self-Sampling for Neural Point Cloud Consolidation [83.31236364265403]
We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud.
We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network.
We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
arXiv Detail & Related papers (2020-08-14T17:16:02Z) - DeepCLR: Correspondence-Less Architecture for Deep End-to-End Point
Cloud Registration [12.471564670462344]
This work addresses the problem of point cloud registration using deep neural networks.
We propose an approach to predict the alignment between two point clouds with overlapping data content, but displaced origins.
Our approach achieves state-of-the-art accuracy and the lowest run-time of the compared methods.
arXiv Detail & Related papers (2020-07-22T08:20:57Z) - TearingNet: Point Cloud Autoencoder to Learn Topology-Friendly
Representations [20.318695890515613]
We propose an autoencoder, TearingNet, which tackles the challenging task of representing point clouds using a fixed-length descriptor.
Our TearingNet is characterized by a proposed Tearing network module and a Folding network module interacting with each other iteratively.
Experimentation shows the superiority of our proposal in terms of reconstructing point clouds as well as generating more topology-friendly representations than benchmarks.
arXiv Detail & Related papers (2020-06-17T22:42:43Z) - Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets
for 3D Generation, Reconstruction and Classification [136.57669231704858]
We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model.
We call our model the Generative PointNet because it can be derived from the discriminative PointNet.
arXiv Detail & Related papers (2020-04-02T23:08:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.