3D Shape Generation and Completion through Point-Voxel Diffusion
- URL: http://arxiv.org/abs/2104.03670v2
- Date: Sun, 11 Apr 2021 22:11:25 GMT
- Title: 3D Shape Generation and Completion through Point-Voxel Diffusion
- Authors: Linqi Zhou, Yilun Du, Jiajun Wu
- Abstract summary: We propose a novel approach for probabilistic generative modeling of 3D shapes.
Point-Voxel Diffusion (PVD) is a unified, probabilistic formulation for unconditional shape generation and conditional, multimodal shape completion.
PVD can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function.
- Score: 24.824065748889048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel approach for probabilistic generative modeling of 3D
shapes. Unlike most existing models that learn to deterministically translate a
latent vector to a shape, our model, Point-Voxel Diffusion (PVD), is a unified,
probabilistic formulation for unconditional shape generation and conditional,
multi-modal shape completion. PVD marries denoising diffusion models with the
hybrid, point-voxel representation of 3D shapes. It can be viewed as a series
of denoising steps, reversing the diffusion process from observed point cloud
data to Gaussian noise, and is trained by optimizing a variational lower bound
to the (conditional) likelihood function. Experiments demonstrate that PVD is
capable of synthesizing high-fidelity shapes, completing partial point clouds,
and generating multiple completion results from single-view depth scans of real
objects.
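The abstract describes PVD as a chain of denoising steps that reverses a diffusion process carrying a point cloud to Gaussian noise, trained via a variational lower bound. A minimal sketch of that DDPM-style forward/reverse machinery on a point cloud follows; the linear noise schedule, step count, and the use of the true noise in place of a learned predictor are illustrative assumptions (PVD itself predicts the noise with a point-voxel network).

```python
import numpy as np

# Sketch of the DDPM forward/reverse processes underlying PVD, applied to a
# point cloud of shape (N, 3). Schedule values are illustrative assumptions.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, rng):
    """Closed-form forward diffusion: draw x_t ~ q(x_t | x_0)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample_step(xt, t, eps_pred, rng):
    """One reverse (denoising) step given a noise estimate eps_pred.

    In PVD eps_pred would come from a learned point-voxel network; here the
    caller supplies it, so the step is exact when the true noise is passed.
    """
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean                      # final step is noise-free
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((2048, 3))     # toy stand-in for a shape's points
xt, eps = q_sample(x0, T - 1, rng)      # at t = T-1, alpha_bar is near 0,
                                        # so xt is close to pure Gaussian noise
x_prev = p_sample_step(xt, T - 1, eps, rng)
```

Training a model like PVD amounts to regressing `eps_pred` toward the `eps` drawn in `q_sample` at random timesteps, which optimizes the variational lower bound mentioned in the abstract.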
Related papers
- Deformable 3D Shape Diffusion Model [21.42513407755273]
We introduce a novel deformable 3D shape diffusion model that facilitates comprehensive 3D shape manipulation.
We demonstrate state-of-the-art performance in point cloud generation and competitive results in mesh deformation.
Our method presents a unique pathway for advancing 3D shape manipulation and unlocking new opportunities in the realm of virtual reality.
arXiv Detail & Related papers (2024-07-31T08:24:42Z)
- Consistent3D: Towards Consistent High-Fidelity Text-to-3D Generation with Deterministic Sampling Prior [87.55592645191122]
Score distillation sampling (SDS) and its variants have greatly boosted the development of text-to-3D generation, but remain vulnerable to geometry collapse and poor textures.
We propose a novel and effective "Consistent3D" method that explores the ODE deterministic sampling prior for text-to-3D generation.
Experimental results show the efficacy of our Consistent3D in generating high-fidelity and diverse 3D objects and large-scale scenes.
arXiv Detail & Related papers (2024-01-17T08:32:07Z)
- PolyDiff: Generating 3D Polygonal Meshes with Diffusion Models [15.846449180313778]
PolyDiff is the first diffusion-based approach capable of directly generating realistic and diverse 3D polygonal meshes.
Our model produces high-quality 3D polygonal meshes, ready for integration into downstream 3D applications.
arXiv Detail & Related papers (2023-12-18T18:19:26Z)
- DiffComplete: Diffusion-based Generative 3D Shape Completion [114.43353365917015]
We introduce a new diffusion-based approach for shape completion on 3D range scans.
We strike a balance between realism, multi-modality, and high fidelity.
DiffComplete sets a new SOTA performance on two large-scale 3D shape completion benchmarks.
arXiv Detail & Related papers (2023-06-28T16:07:36Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm that focuses on learning local structure.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation [54.09274684734721]
We present a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
We may jointly train an encoder network to learn a latent space for inverting shapes, enabling a rich variety of whole-shape and region-aware shape manipulations.
arXiv Detail & Related papers (2023-02-01T02:47:53Z)
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models [58.357180353368896]
We propose a conditional paradigm that benefits from the denoising diffusion probabilistic model (DDPM) to tackle the problem of realistic and diverse action-conditioned 3D skeleton-based motion generation.
Ours is a pioneering attempt to use DDPM to synthesize a variable number of motion sequences conditioned on a categorical action.
arXiv Detail & Related papers (2023-01-10T13:15:42Z)
- Neural Wavelet-domain Diffusion for 3D Shape Generation [52.038346313823524]
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets.
arXiv Detail & Related papers (2022-09-19T02:51:48Z)
- Diffusion Probabilistic Models for 3D Point Cloud Generation [12.257593992442732]
We present a probabilistic model for point cloud generation that is critical for various 3D vision tasks.
Inspired by the diffusion process in non-equilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system in contact with a heat bath.
We derive the variational bound in closed form for training and provide implementations of the model.
arXiv Detail & Related papers (2021-03-02T03:56:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.