Diffusion Probabilistic Models for 3D Point Cloud Generation
- URL: http://arxiv.org/abs/2103.01458v1
- Date: Tue, 2 Mar 2021 03:56:02 GMT
- Title: Diffusion Probabilistic Models for 3D Point Cloud Generation
- Authors: Shitong Luo, Wei Hu
- Abstract summary: We present a probabilistic model for point cloud generation, which is critical for various 3D vision tasks.
Inspired by the diffusion process in non-equilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system in contact with a heat bath.
We derive the variational bound in closed form for training and provide implementations of the model.
- Score: 12.257593992442732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a probabilistic model for point cloud generation, which is
critical for various 3D vision tasks such as shape completion, upsampling,
synthesis and data augmentation. Inspired by the diffusion process in
non-equilibrium thermodynamics, we view points in point clouds as particles in
a thermodynamic system in contact with a heat bath, which diffuse from the
original distribution to a noise distribution. Point cloud generation thus
amounts to learning the reverse diffusion process that transforms the noise
distribution to the distribution of a desired shape. Specifically, we propose
to model the reverse diffusion process for point clouds as a Markov chain
conditioned on a shape latent. We derive the variational bound in closed
form for training and provide implementations of the model. Experimental
results demonstrate that our model achieves state-of-the-art performance in
point cloud generation and auto-encoding. The code is available at
\url{https://github.com/luost26/diffusion-point-cloud}.
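The forward process and training objective described in the abstract can be sketched as follows. This is a minimal illustration of the standard DDPM formulation applied per-point, not the paper's exact implementation; the noise schedule, the network `eps_net`, and the shape latent `z` are assumed placeholders.

```python
import numpy as np

# Linear noise schedule beta_1..beta_T, a common DDPM choice; the paper's
# exact schedule may differ -- this is an illustrative assumption.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a point cloud x0 of shape (N, 3).

    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I),
    applied independently to every point -- the "particles in contact with
    a heat bath" view from the abstract.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def training_loss(eps_net, x0, z, rng):
    """Simplified variational-bound surrogate: predict the injected noise
    from (x_t, t, z), where z is the shape latent conditioning the
    reverse Markov chain."""
    t = rng.integers(0, T)
    xt, eps = forward_diffuse(x0, t, rng)
    return np.mean((eps_net(xt, t, z) - eps) ** 2)
```

Generation then runs the learned reverse chain from Gaussian noise back toward the shape distribution, conditioned on `z`.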
Related papers
- Point Cloud Resampling with Learnable Heat Diffusion [58.050130177241186]
We propose a learnable heat diffusion framework for point cloud resampling.
Unlike previous diffusion models with a fixed prior, the adaptive conditional prior selectively preserves geometric features of the point cloud.
arXiv Detail & Related papers (2024-11-21T13:44:18Z)
- Enhancing Diffusion-based Point Cloud Generation with Smoothness Constraint [5.140589325829964]
Diffusion models have been popular for point cloud generation tasks.
We propose incorporating the local smoothness constraint into the diffusion framework for point cloud generation.
Experiments demonstrate the proposed model can generate realistic shapes and smoother point clouds, outperforming multiple state-of-the-art methods.
arXiv Detail & Related papers (2024-04-03T01:55:15Z)
- Lecture Notes in Probabilistic Diffusion Models [0.5361320134021585]
Diffusion models are loosely modelled on non-equilibrium thermodynamics.
The diffusion model learns the data manifold to which the original and thus the reconstructed data samples belong.
Diffusion models have -- unlike variational autoencoders and flow models -- latent variables with the same dimensionality as the original data.
arXiv Detail & Related papers (2023-12-16T09:36:54Z)
- PCRDiffusion: Diffusion Probabilistic Models for Point Cloud Registration [28.633279452622475]
We propose a new framework that formulates point cloud registration as a denoising diffusion process from noisy transformation to object transformation.
During the training stage, the object transformation diffuses from the ground-truth transformation to a random distribution, and the model learns to reverse this noising process.
During the sampling stage, the model progressively refines a randomly generated transformation into the output result.
arXiv Detail & Related papers (2023-12-11T01:56:42Z)
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion [68.39543754708124]
We introduce DiffFacto, a novel probabilistic generative model that learns the distribution of shapes with part-level control.
Experiments show that our method is able to generate novel shapes with multiple axes of control.
It achieves state-of-the-art part-level generation quality and generates plausible and coherent shapes.
arXiv Detail & Related papers (2023-05-03T06:38:35Z)
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models [58.357180353368896]
We propose a conditional paradigm that benefits from the denoising diffusion probabilistic model (DDPM) to tackle the problem of realistic and diverse action-conditioned 3D skeleton-based motion generation.
Ours is a pioneering attempt to use DDPM to synthesize a variable number of motion sequences conditioned on a categorical action.
arXiv Detail & Related papers (2023-01-10T13:15:42Z)
- OCD: Learning to Overfit with Conditional Diffusion Models [95.1828574518325]
We present a dynamic model in which the weights are conditioned on an input sample x.
We learn to match those weights that would be obtained by finetuning a base model on x and its label y.
arXiv Detail & Related papers (2022-10-02T09:42:47Z)
- 3D Shape Generation and Completion through Point-Voxel Diffusion [24.824065748889048]
We propose a novel approach for probabilistic generative modeling of 3D shapes.
Point-Voxel Diffusion (PVD) is a unified, probabilistic formulation for unconditional shape generation and conditional, multimodal shape completion.
PVD can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function.
arXiv Detail & Related papers (2021-04-08T10:38:03Z)
- Learning Gradient Fields for Shape Generation [69.85355757242075]
A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape.
We generate point clouds by performing gradient ascent on an unnormalized probability density.
Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models.
arXiv Detail & Related papers (2020-08-14T18:06:15Z)
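The gradient-ascent sampling described in the last entry can be sketched as Langevin dynamics: each update follows the predicted gradient of the log density (the score) plus injected Gaussian noise. The score network, step count, and step size below are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np

def langevin_sample(score_net, n_points=2048, steps=100, step_size=1e-3, rng=None):
    """Draw a point cloud by gradient ascent on an unnormalized log density.

    score_net(x) should return the predicted gradient of the log density
    at each point; the update rule is standard (unadjusted) Langevin
    dynamics, as used by score-based generative models.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = rng.standard_normal((n_points, 3))  # initialize from Gaussian noise
    for _ in range(steps):
        grad = score_net(x)                  # predicted log-density gradient
        noise = rng.standard_normal(x.shape)
        x = x + step_size * grad + np.sqrt(2.0 * step_size) * noise
    return x
```

With a well-trained score network, the points drift toward high-density regions, i.e. near the surface of the shape, while the noise term keeps the chain exploring.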
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.