LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
- URL: http://arxiv.org/abs/2311.13384v2
- Date: Thu, 23 Nov 2023 11:40:16 GMT
- Title: LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
- Authors: Jaeyoung Chung, Suyoung Lee, Hyeongjin Nam, Jaerin Lee, Kyoung Mu Lee
- Abstract summary: Existing 3D scene generation models limit the target scene to a specific domain.
We propose LucidDreamer, a domain-free scene generation pipeline.
LucidDreamer produces highly detailed Gaussian splats with no constraint on the domain of the target scene.
- Score: 52.31402192831474
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the widespread use of VR devices and content, demand for 3D scene
generation techniques has grown. Existing 3D scene generation models, however,
limit the target scene to a specific domain, primarily because their training
relies on 3D scan datasets that are far removed from the real world. To address
this limitation, we propose LucidDreamer, a domain-free scene generation
pipeline that fully leverages the power of existing large-scale diffusion-based
generative models. LucidDreamer alternates between two steps: Dreaming and
Alignment. First, to generate multi-view consistent images from the inputs, we
use the point cloud as a geometric guideline for each image generation.
Specifically, we project a portion of the point cloud to the desired view and
provide the projection as guidance for inpainting with the generative model.
The inpainted images are lifted to 3D space with estimated depth maps,
composing new points. Second, to aggregate the new points into the 3D scene, we
propose an alignment algorithm that harmoniously integrates the newly generated
portions of the 3D scene. The resulting 3D scene serves as the initial points
for optimizing Gaussian splats. LucidDreamer produces Gaussian splats that are
highly detailed compared to previous 3D scene generation methods, with no
constraint on the domain of the target scene. Project page:
https://luciddreamer-cvlab.github.io/
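
To make the Dreaming and Alignment loop concrete, here is a minimal Python/NumPy sketch of the pipeline as described in the abstract; it is not the authors' released code. The function names and the callbacks passed in (inpaint, estimate_depth, unproject, align) are hypothetical placeholders standing in for a diffusion inpainting model, a monocular depth estimator, depth back-projection, and the paper's alignment algorithm.

```python
import numpy as np

def project_point_cloud(points, colors, pose, K, hw=(512, 512)):
    """Rasterize a colored point cloud into the target camera view.

    Returns a partial RGB image plus a boolean mask of empty pixels,
    i.e. the regions the diffusion model is asked to inpaint.
    (No z-buffering here; a real implementation would keep the nearest point.)
    """
    h, w = hw
    image = np.zeros((h, w, 3), dtype=np.float32)
    hole_mask = np.ones((h, w), dtype=bool)        # True where nothing projects
    homog = np.hstack([points, np.ones((len(points), 1))])
    cam = (pose @ homog.T).T[:, :3]                # world -> camera coordinates
    front = cam[:, 2] > 1e-6                       # keep points in front of the camera
    pix = (K @ cam[front].T).T
    uv = (pix[:, :2] / pix[:, 2:3]).astype(int)    # perspective divide -> pixel coords
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    image[uv[ok, 1], uv[ok, 0]] = colors[front][ok]
    hole_mask[uv[ok, 1], uv[ok, 0]] = False
    return image, hole_mask

def dreaming_alignment_loop(points, colors, poses, K,
                            inpaint, estimate_depth, unproject, align):
    """Alternate Dreaming (inpaint the projected view, lift it to 3D with an
    estimated depth map) and Alignment (merge the lifted points into the
    running scene). The callbacks are hypothetical stand-ins, not a real API."""
    for pose in poses:
        partial, hole_mask = project_point_cloud(points, colors, pose, K)
        completed = inpaint(partial, hole_mask)    # diffusion inpainting (stub)
        depth = estimate_depth(completed)          # monocular depth estimate (stub)
        new_pts, new_cols = unproject(completed, depth, pose, K, hole_mask)
        points, colors = align(points, colors, new_pts, new_cols)
    return points, colors                          # -> initialization for 3DGS fitting
```

In this reading, the projected partial image acts as the geometric guideline mentioned in the abstract: only the empty, masked pixels are left for the diffusion model to fill, and each inpainted view is lifted to 3D and merged back before the next view is generated, so the aggregated point cloud stays multi-view consistent and can initialize the Gaussian splat optimization.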
Related papers
- SceneDreamer360: Text-Driven 3D-Consistent Scene Generation with Panoramic Gaussian Splatting [53.32467009064287]
We propose a text-driven 3D-consistent scene generation model: SceneDreamer360.
Our proposed method leverages a text-driven panoramic image generation model as a prior for 3D scene generation.
Our experiments demonstrate that SceneDreamer360, with its panoramic image generation and 3DGS, can produce higher-quality, spatially consistent, and visually appealing 3D scenes from any text prompt.
arXiv Detail & Related papers (2024-08-25T02:56:26Z)
- LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field [13.815932949774858]
Cinemagraph is a form of visual media that combines elements of still photography and subtle motion to create a captivating experience.
We propose LoopGaussian to elevate cinemagraph from 2D image space to 3D space using 3D Gaussian modeling.
Experiment results validate the effectiveness of our approach, demonstrating high-quality and visually appealing scene generation.
arXiv Detail & Related papers (2024-04-13T11:07:53Z)
- DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting [56.101576795566324]
We present a text-to-3D 360$^\circ$ scene generation pipeline.
Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement.
Our method offers a globally consistent 3D scene within a 360$^\circ$ perspective.
arXiv Detail & Related papers (2024-04-10T10:46:59Z)
- 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network to synthesize new content with higher quality by exploiting the natural image prior of the 2D diffusion model and the global 3D information of the current scene.
Our approach supports a wide variety of scene generation and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z)
- GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting [52.150502668874495]
We present GALA3D, generative 3D GAussians with LAyout-guided control, for effective compositional text-to-3D generation.
GALA3D is a user-friendly, end-to-end framework for state-of-the-art scene-level 3D content generation and controllable editing.
arXiv Detail & Related papers (2024-02-11T13:40:08Z)
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections [49.802462165826554]
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
arXiv Detail & Related papers (2023-02-02T18:59:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.