SALAD: Part-Level Latent Diffusion for 3D Shape Generation and Manipulation
- URL: http://arxiv.org/abs/2303.12236v2
- Date: Wed, 20 Mar 2024 04:47:38 GMT
- Title: SALAD: Part-Level Latent Diffusion for 3D Shape Generation and Manipulation
- Authors: Juil Koo, Seungwoo Yoo, Minh Hieu Nguyen, Minhyuk Sung
- Abstract summary: We present a cascaded diffusion model based on a part-level implicit 3D representation.
Our model achieves state-of-the-art generation quality and also enables part-level shape editing and manipulation without any additional training in a conditional setup.
- Score: 11.828311976126301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a cascaded diffusion model based on a part-level implicit 3D representation. Our model achieves state-of-the-art generation quality and also enables part-level shape editing and manipulation without any additional training in a conditional setup. Diffusion models have demonstrated impressive capabilities in data generation as well as zero-shot completion and editing via a guided reverse process. Recent research on 3D diffusion models has focused on improving their generation capabilities with various data representations, while the absence of structural information has limited their capability in completion and editing tasks. We thus propose our novel diffusion model using a part-level implicit representation. To effectively learn diffusion with high-dimensional embedding vectors of parts, we propose a cascaded framework, learning diffusion first on a low-dimensional subspace encoding extrinsic parameters of parts and then on the high-dimensional subspace encoding intrinsic attributes. In the experiments, we demonstrate that our method outperforms previous ones in both generation and part-level completion and manipulation tasks.
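As a concrete picture of the cascade described in the abstract, the sketch below samples part extrinsics first and then part intrinsics conditioned on them. It is a minimal illustration, not the authors' code: the dimensions, the DDPM-style sampler, and the zero-returning stand-in denoisers are all assumptions.

```python
import torch

# Hypothetical sizes: 16 parts, 6-D extrinsics (e.g. part position/scale),
# 512-D intrinsic latent per part, 1000 diffusion steps.
P, D_EXT, D_INT, T = 16, 6, 512, 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def extrinsic_eps_model(x, t, cond=None):
    return torch.zeros_like(x)  # stand-in for a trained extrinsic denoiser

def intrinsic_eps_model(x, t, cond=None):
    return torch.zeros_like(x)  # stand-in for a trained intrinsic denoiser

def ddpm_sample(eps_model, shape, cond=None):
    """Plain DDPM ancestral sampling; eps_model predicts the added noise."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps = eps_model(x, t, cond)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Stage 1: diffusion on the low-dimensional subspace of part extrinsics.
extrinsics = ddpm_sample(extrinsic_eps_model, (P, D_EXT))
# Stage 2: diffusion on the high-dimensional intrinsic subspace, conditioned on stage 1.
intrinsics = ddpm_sample(intrinsic_eps_model, (P, D_INT), cond=extrinsics)
```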
Related papers
- DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer [26.36250599941058]
We present DreamBeast, a novel method for generating 3D animal assets composed of distinct parts.
For each generated asset, we efficiently extract part-level knowledge from the Stable Diffusion 3 model.
We then use this knowledge to modulate the guidance of a multi-view diffusion model during SDS to create 3D assets of fantastical animals.
arXiv Detail & Related papers (2024-09-12T17:58:31Z)
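A minimal sketch of how part-level knowledge could modulate SDS guidance, as the summary above describes. The per-pixel `part_weight` map and the zero-returning `eps_model` are illustrative stand-ins for the distilled part knowledge and the frozen multi-view prior; this is not the DreamBeast implementation.

```python
import torch

def eps_model(x, t):
    return torch.zeros_like(x)  # stand-in for a frozen multi-view diffusion prior

def part_aware_sds_grad(rendered, part_weight, t, alpha_bar):
    """Standard SDS gradient, with the guidance modulated per pixel by a
    part-affinity weight map."""
    noise = torch.randn_like(rendered)
    noisy = torch.sqrt(alpha_bar) * rendered + torch.sqrt(1 - alpha_bar) * noise
    eps_pred = eps_model(noisy, t)
    return ((eps_pred - noise) * part_weight).detach()

render = torch.rand(1, 3, 64, 64)    # differentiable render of the 3D asset
weights = torch.rand(1, 1, 64, 64)   # per-pixel part weights (assumed given)
grad = part_aware_sds_grad(render, weights, t=500, alpha_bar=torch.tensor(0.5))
```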
- Text-to-Image Rectified Flow as Plug-and-Play Priors [52.586838532560755]
Rectified flow is a novel class of generative models that enforces a linear progression from the source to the target distribution.
We show that rectified flow approaches surpass prior methods in generation quality and efficiency, requiring fewer inference steps.
Our method also displays competitive performance in image inversion and editing.
arXiv Detail & Related papers (2024-06-05T14:02:31Z)
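The linear-path property the summary above refers to is easy to state in code. Below is a generic rectified-flow loss and Euler sampler, a standard formulation rather than this paper's code; the toy `v_model` is an assumption for the sake of a runnable example.

```python
import torch

def rf_loss(v_model, x0, x1):
    """Rectified-flow training: regress the straight-line velocity x1 - x0
    at a random point on the linear interpolation path."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    return ((v_model(xt, t) - (x1 - x0)) ** 2).mean()

def rf_sample(v_model, x0, steps=4):
    """Euler integration of the learned ODE; near-straight paths are why
    few inference steps suffice."""
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + v_model(x, t) * dt
    return x

v_model = lambda x, t: x - t              # toy velocity field, illustration only
samples = rf_sample(v_model, torch.randn(8, 2))
```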
- Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields [50.12118098874321]
We introduce a latent 3D diffusion process for neural voxel fields, enabling generation at significantly higher resolutions.
A part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding accurate part decomposition.
The results demonstrate the superior generative capabilities of our proposed method in part-aware shape generation, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2024-05-02T04:31:17Z)
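One plausible reading of a part-aware decoder over voxel features, sketched below: voxel features are scored against learned part codes, and the soft assignment both mixes part information into the density prediction and exposes a part decomposition. All module names and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PartAwareDecoder(nn.Module):
    """Illustrative decoder: voxel features attend to K learned part codes,
    yielding density plus a soft part assignment per voxel."""
    def __init__(self, feat_dim=32, code_dim=64, num_parts=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, code_dim)
        self.part_codes = nn.Parameter(torch.randn(num_parts, code_dim))
        self.density = nn.Linear(code_dim, 1)

    def forward(self, voxel_feat):              # (N, feat_dim) sampled features
        q = self.proj(voxel_feat)               # (N, code_dim)
        part_logits = q @ self.part_codes.t()   # (N, num_parts) decomposition
        w = part_logits.softmax(dim=-1)
        mixed = w @ self.part_codes             # part-conditioned feature
        return self.density(q + mixed), part_logits

density, parts = PartAwareDecoder()(torch.randn(10, 32))
```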
- SC-Diff: 3D Shape Completion with Latent Diffusion Models [4.913210912019975]
This paper introduces a 3D shape completion approach using a 3D latent diffusion model optimized for completing shapes.
Our method combines image-based conditioning through cross-attention and spatial conditioning through the integration of 3D features from captured partial scans.
arXiv Detail & Related papers (2024-03-19T06:01:11Z)
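A sketch of how the two conditioning routes in the summary above can coexist in one denoiser block: spatial conditioning by channel-concatenating partial-scan features, and image conditioning by cross-attention over image tokens. Layer choices and dimensions are assumptions, not SC-Diff's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionedDenoiserBlock(nn.Module):
    """One block combining spatial conditioning (concatenated 3D scan
    features) with cross-attention over image tokens."""
    def __init__(self, c_latent=64, c_scan=32, c_img=128, heads=4):
        super().__init__()
        self.fuse = nn.Conv3d(c_latent + c_scan, c_latent, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(c_latent, heads, kdim=c_img, vdim=c_img,
                                          batch_first=True)

    def forward(self, z, scan_feat, img_tokens):
        # z, scan_feat: (B, C, D, H, W); img_tokens: (B, L, c_img)
        h = self.fuse(torch.cat([z, scan_feat], dim=1))   # spatial conditioning
        B, C, D, H, W = h.shape
        q = h.flatten(2).transpose(1, 2)                  # (B, D*H*W, C)
        a, _ = self.attn(q, img_tokens, img_tokens)       # image cross-attention
        return (q + a).transpose(1, 2).reshape(B, C, D, H, W)

blk = ConditionedDenoiserBlock()
out = blk(torch.randn(1, 64, 8, 8, 8), torch.randn(1, 32, 8, 8, 8),
          torch.randn(1, 77, 128))
```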
- Bayesian Diffusion Models for 3D Shape Reconstruction [54.69889488052155]
We present Bayesian Diffusion Models (BDM), a prediction algorithm that performs effective Bayesian inference by tightly coupling top-down (prior) information with the bottom-up (data-driven) procedure.
We show the effectiveness of BDM on the 3D shape reconstruction task.
arXiv Detail & Related papers (2024-03-11T17:55:53Z)
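The summary is high-level, but one concrete way to couple a top-down prior with a bottom-up predictor is to blend their noise estimates at each reverse step, as sketched below. This is an illustrative guess at the mechanism, with stand-in models; it is not claimed to be BDM's actual fusion rule.

```python
import torch

def prior_eps(x, t):
    return torch.zeros_like(x)   # stand-in: generic top-down shape prior

def recon_eps(x, t, obs):
    return x - obs               # stand-in: bottom-up, observation-driven predictor

def fused_eps(x, t, obs, lam=0.5):
    """Blend the prior's noise estimate with the data-driven estimate; lam
    trades off the two information sources at every reverse step."""
    return lam * prior_eps(x, t) + (1 - lam) * recon_eps(x, t, obs)
```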
- VolumeDiffusion: Flexible Text-to-3D Generation with Efficient Volumetric Encoder [56.59814904526965]
This paper introduces a pioneering 3D encoder designed for text-to-3D generation.
A lightweight network is developed to efficiently acquire feature volumes from multi-view images.
A diffusion model with a 3D U-Net is then trained on these feature volumes for text-to-3D generation.
arXiv Detail & Related papers (2023-12-18T18:59:05Z)
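A common way to turn multi-view image features into a feature volume is to unproject and average, sketched below. This is a generic construction under assumed pinhole conventions (points in front of every camera, pixel-space intrinsics), not VolumeDiffusion's encoder.

```python
import torch
import torch.nn.functional as F

def unproject_to_volume(feats, K, Rt, grid):
    """Average 2D image features into a shared 3D feature volume.

    feats: (V, C, H, W) per-view feature maps
    K:     (V, 3, 3) intrinsics; Rt: (V, 3, 4) world-to-camera extrinsics
    grid:  (D, H3, W3, 3) world coordinates of voxel centers
    """
    V, C = feats.shape[:2]
    H, W = feats.shape[-2:]
    pts = grid.reshape(-1, 3)                              # (N, 3)
    vol = 0
    for v in range(V):
        cam = (Rt[v, :, :3] @ pts.t() + Rt[v, :, 3:]).t()  # world -> camera
        pix = (K[v] @ cam.t()).t()
        uv = pix[:, :2] / pix[:, 2:].clamp(min=1e-6)       # perspective divide
        # normalize pixel coordinates to [-1, 1] for grid_sample
        uv = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
        samp = F.grid_sample(feats[v:v + 1], uv.view(1, 1, -1, 2),
                             align_corners=True)           # (1, C, 1, N)
        vol = vol + samp.view(C, -1)
    return (vol / V).view(C, *grid.shape[:3])              # (C, D, H3, W3)
```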
- Diffusion-SS3D: Diffusion Model for Semi-supervised 3D Object Detection [77.23918785277404]
We present Diffusion-SS3D, a new perspective on enhancing the quality of pseudo-labels via the diffusion model for semi-supervised 3D object detection.
Specifically, we add noise to produce corrupted 3D object sizes and class label distributions, and then utilize the diffusion model as a denoising process to obtain bounding box outputs.
We conduct experiments on the ScanNet and SUN RGB-D benchmark datasets to demonstrate that our approach achieves state-of-the-art performance against existing methods.
arXiv Detail & Related papers (2023-12-05T18:54:03Z)
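The noising step described above can be written as a standard Gaussian forward process applied to box sizes and class-probability vectors; the shapes below (3-D sizes, K-way label distributions) are illustrative, not the paper's exact parameterization.

```python
import torch

def corrupt_pseudo_labels(sizes, class_probs, alpha_bar):
    """Forward (noising) process on pseudo-label ingredients.
    sizes: (N, 3) box dimensions; class_probs: (N, K) label distributions;
    alpha_bar: scalar tensor for the chosen timestep."""
    a, b = torch.sqrt(alpha_bar), torch.sqrt(1 - alpha_bar)
    noisy_sizes = a * sizes + b * torch.randn_like(sizes)
    noisy_probs = a * class_probs + b * torch.randn_like(class_probs)
    return noisy_sizes, noisy_probs

# The detector's decoder then plays the denoiser, mapping corrupted sizes and
# label distributions (plus scene features) back to clean bounding boxes.
s, p = corrupt_pseudo_labels(torch.rand(128, 3), torch.rand(128, 18),
                             torch.tensor(0.7))
```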
- Generative Novel View Synthesis with 3D-Aware Diffusion Models [96.78397108732233]
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
Our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.
In addition to generating novel views, our method has the ability to autoregressively synthesize 3D-consistent sequences.
arXiv Detail & Related papers (2023-04-05T17:15:47Z)
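The autoregressive behavior described above suggests a loop of the following shape: lift all views generated so far into a 3D feature volume, project it to the next target pose, and condition the 2D diffusion sampler on the result. The callables are placeholders; this is a structural sketch, not the paper's pipeline.

```python
def synthesize_sequence(diffusion_sampler, build_feature_volume, render_features,
                        first_image, first_pose, poses):
    """Autoregressive novel-view loop: each new view is sampled conditioned on
    geometry features aggregated from every view generated so far."""
    views, view_poses = [first_image], [first_pose]
    for pose in poses:
        vol = build_feature_volume(views, view_poses)  # lift views into 3D
        cond = render_features(vol, pose)              # project to target pose
        img = diffusion_sampler(cond)                  # 2D diffusion backbone
        views.append(img)
        view_poses.append(pose)
    return views
```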
- 3D Neural Field Generation using Triplane Diffusion [37.46688195622667]
We present an efficient diffusion-based model for 3D-aware generation of neural fields.
Our approach pre-processes training data, such as ShapeNet meshes, by converting them to continuous occupancy fields.
We demonstrate state-of-the-art results on 3D generation on several object classes from ShapeNet.
arXiv Detail & Related papers (2022-11-30T01:55:52Z)
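The triplane representation named in the title decodes a 3D point by projecting it onto three axis-aligned feature planes, sampling each, and mapping the aggregated feature to occupancy. Below is the standard construction with illustrative sizes; the diffusion model would then operate on the plane features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneOccupancy(nn.Module):
    """Standard triplane decoder: sum bilinear samples from three planes,
    then map the feature to an occupancy logit."""
    def __init__(self, c=32, res=128):
        super().__init__()
        self.planes = nn.Parameter(torch.randn(3, c, res, res))  # xy, xz, yz
        self.mlp = nn.Sequential(nn.Linear(c, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pts):                    # pts: (N, 3) in [-1, 1]
        pairs = [(0, 1), (0, 2), (1, 2)]       # coordinate pair per plane
        feat = 0
        for p, (i, j) in enumerate(pairs):
            uv = pts[:, [i, j]].view(1, 1, -1, 2)
            samp = F.grid_sample(self.planes[p:p + 1], uv, align_corners=True)
            feat = feat + samp.view(-1, pts.shape[0]).t()  # (N, c)
        return self.mlp(feat)                  # occupancy logits

occ = TriplaneOccupancy()(torch.rand(100, 3) * 2 - 1)
```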
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.