Controllable Mesh Generation Through Sparse Latent Point Diffusion Models
- URL: http://arxiv.org/abs/2303.07938v2
- Date: Wed, 15 Mar 2023 03:13:08 GMT
- Title: Controllable Mesh Generation Through Sparse Latent Point Diffusion Models
- Authors: Zhaoyang Lyu, Jinyi Wang, Yuwei An, Ya Zhang, Dahua Lin, Bo Dai
- Abstract summary: We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mesh generation is of great value in various applications involving computer graphics and virtual content, yet designing generative models for meshes is challenging due to their irregular data structure and the inconsistent topology of meshes in the same category. In this work, we design a novel sparse latent point diffusion model for mesh generation. Our key insight is to regard point clouds as an intermediate representation of meshes and to model the distribution of point clouds instead. Since meshes can be reconstructed from point clouds via techniques like Shape as Points (SAP), the challenges of directly generating meshes are effectively avoided. To boost the efficiency and controllability of our mesh generation method, we further encode point clouds to a set of sparse latent points with point-wise semantically meaningful features, and train two DDPMs in the space of sparse latent points to model the distribution of latent point positions and of the features at these latent points, respectively. Sampling in this latent space is faster than directly sampling dense point clouds. Moreover, the sparse latent points enable explicit control over both the overall structure and the local details of the generated meshes. Extensive experiments on the ShapeNet dataset show that our sparse latent point diffusion model achieves superior generation quality and controllability compared to existing methods.
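To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the sampling side. The stand-in denoisers, the latent sizes K and D, and the linear noise schedule are illustrative assumptions, not the paper's actual configuration; in the paper, the second DDPM is conditioned on the positions produced by the first.

```python
import torch

# Hypothetical sizes: K sparse latent points, D feature channels, T diffusion steps.
K, D, T = 16, 64, 1000

# Standard linear beta schedule (Ho et al., 2020); the paper's schedule may differ.
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def ddpm_sample(eps_model, shape):
    """Ancestral DDPM sampling: start from Gaussian noise, denoise for T steps."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps = eps_model(x, t)  # predicted noise at step t
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x = x + torch.sqrt(betas[t]) * z
    return x

# Stand-in denoisers: the paper trains two real networks, and the feature DDPM is
# conditioned on the sampled positions (this placeholder ignores that conditioning).
pos_eps = lambda x, t: torch.zeros_like(x)
feat_eps = lambda x, t: torch.zeros_like(x)

latent_pos = ddpm_sample(pos_eps, (K, 3))    # stage 1: positions of sparse latent points
latent_feat = ddpm_sample(feat_eps, (K, D))  # stage 2: semantic features at those points
# A learned decoder would expand (latent_pos, latent_feat) into a dense point cloud,
# which Shape as Points (SAP) then reconstructs into a mesh.
```

Because features are sampled after positions, editing the sparse latent point positions between the two stages is a natural handle for the structural control the abstract describes.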
Related papers
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even novel 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows [58.96306192736593]
We present PU-Flow, which incorporates normalizing flows and weight prediction techniques to produce dense points uniformly distributed on the underlying surface.
Specifically, we formulate the upsampling process as an ensemble of neighboring points in a latent space, where the ensemble weights are adaptively learned from local geometric context (a minimal sketch of this idea appears after this list).
We show that our method outperforms state-of-the-art deep learning-based approaches in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
arXiv Detail & Related papers (2021-07-13T07:45:48Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
- Representing Point Clouds with Generative Conditional Invertible Flow Networks [15.280751949071016]
We propose a simple yet effective method to represent point clouds as sets of samples drawn from a cloud-specific probability distribution.
Our method leverages generative invertible flow networks to learn embeddings as well as to generate point clouds.
Our model offers competitive or superior quantitative results on benchmark datasets.
arXiv Detail & Related papers (2020-10-07T18:30:47Z)
- Learning Gradient Fields for Shape Generation [69.85355757242075]
A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape.
We generate point clouds by performing gradient ascent on an unnormalized probability density.
Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models.
arXiv Detail & Related papers (2020-08-14T18:06:15Z)
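The gradient-field entry above generates point clouds by gradient ascent on a learned log density. The sketch below runs the generic Langevin update with a toy analytic score standing in for the trained network; the step count and step size are arbitrary assumptions.

```python
import torch

def langevin_sample(score_fn, n_points=2048, steps=200, step_size=1e-2):
    """Langevin dynamics: noisy gradient ascent on an (unnormalized) log density."""
    x = torch.randn(n_points, 3)  # start points from a broad Gaussian prior
    for _ in range(steps):
        x = x + 0.5 * step_size * score_fn(x) + (step_size ** 0.5) * torch.randn_like(x)
    return x

# Toy stand-in score: the exact gradient of the log density of a unit Gaussian.
# A trained model would instead concentrate density near the shape's surface.
gaussian_score = lambda x: -x
points = langevin_sample(gaussian_score)  # (2048, 3) samples from N(0, I)
```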
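The PU-Flow entry earlier in this list formulates upsampling as a weighted ensemble of neighboring points in a latent space; as promised there, here is a minimal sketch of that idea. The identity maps, the random normalized weights, and the neighborhood size are placeholders for the trained invertible flow and the adaptively learned weights.

```python
import torch

def upsample_latent_ensemble(points, flow, inv_flow, k=4, ratio=4):
    """Upsample a point cloud by interpolating neighbors in a latent space."""
    z = flow(points)                                        # (N, C) latent codes
    n = points.shape[0]
    dists = torch.cdist(points, points)                     # pairwise Euclidean distances
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]   # k nearest neighbors, self excluded
    new_z = []
    for _ in range(ratio - 1):
        w = torch.rand(n, k)                                # stand-in for adaptively learned weights
        w = w / w.sum(dim=1, keepdim=True)                  # convex combination of neighbor codes
        new_z.append((w.unsqueeze(-1) * z[knn]).sum(dim=1))
    return inv_flow(torch.cat([z] + new_z, dim=0))          # map all codes back to 3D

# Identity maps stand in for the trained normalizing flow and its inverse.
identity = lambda x: x
sparse = torch.randn(256, 3)
dense = upsample_latent_ensemble(sparse, identity, identity)  # (1024, 3)
```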