Patch-based 3D Natural Scene Generation from a Single Example
- URL: http://arxiv.org/abs/2304.12670v2
- Date: Wed, 26 Apr 2023 10:34:49 GMT
- Title: Patch-based 3D Natural Scene Generation from a Single Example
- Authors: Weiyu Li, Xuelin Chen, Jue Wang, Baoquan Chen
- Abstract summary: We target a 3D generative model for general natural scenes that are typically unique and intricate.
Inspired by classical patch-based image models, we advocate for synthesizing 3D scenes at the patch level, given a single example.
- Score: 35.37200601332951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We target a 3D generative model for general natural scenes, which are typically unique and intricate. The lack of sufficient training data, along with the difficulty of devising ad hoc designs in the presence of varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate synthesizing 3D scenes at the patch level, given a single example. At the core of this work lie important algorithmic designs w.r.t. the scene representation and the generative patch nearest-neighbor module, which address the unique challenges arising from lifting the classical 2D patch-based framework to 3D generation. Collectively, these design choices contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated on a variety of exemplar scenes.
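To make the patch-level idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of patch nearest-neighbor synthesis lifted to a 3D value grid: every patch of a coarse initial scene is replaced by its nearest patch from the single exemplar, and overlapping replacements are averaged. The grid contents, patch size, L2 metric, and all function names here are assumptions for illustration; the paper's actual scene representation and generative nearest-neighbor module are more involved.

```python
# Minimal sketch (assumed, not the authors' code): patch nearest-neighbor
# synthesis on a dense 3D value grid, loosely following classical 2D
# patch-based models lifted to 3D.
import numpy as np

def extract_patches(grid, p):
    """Collect all overlapping p*p*p patches of a 3D grid as flat vectors."""
    D, H, W = grid.shape
    patches = []
    for z in range(D - p + 1):
        for y in range(H - p + 1):
            for x in range(W - p + 1):
                patches.append(grid[z:z+p, y:y+p, x:x+p].ravel())
    return np.stack(patches)

def patch_nn_synthesis(exemplar, init, p=3, iters=2):
    """Replace every patch of `init` with its nearest exemplar patch (L2),
    average overlapping replacements, and repeat for a few iterations."""
    bank = extract_patches(exemplar, p)  # exemplar patch bank
    out = init.copy()
    D, H, W = out.shape
    for _ in range(iters):
        acc = np.zeros_like(out)
        cnt = np.zeros_like(out)
        for z in range(D - p + 1):
            for y in range(H - p + 1):
                for x in range(W - p + 1):
                    q = out[z:z+p, y:y+p, x:x+p].ravel()
                    # brute-force L2 nearest neighbor over the patch bank
                    idx = np.argmin(((bank - q) ** 2).sum(axis=1))
                    acc[z:z+p, y:y+p, x:x+p] += bank[idx].reshape(p, p, p)
                    cnt[z:z+p, y:y+p, x:x+p] += 1
        out = acc / np.maximum(cnt, 1)
    return out

# Tiny usage example: a noisy copy of the exemplar serves as initialization,
# as is common in single-example patch synthesis.
rng = np.random.default_rng(0)
exemplar = rng.random((12, 12, 12))
init = exemplar + 0.3 * rng.standard_normal(exemplar.shape)
print(patch_nn_synthesis(exemplar, init, p=3, iters=2).shape)
```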
Related papers
- Interactive Scene Authoring with Specialized Generative Primitives [25.378818867764323]
Specialized Generative Primitives is a generative framework that allows non-expert users to author high-quality 3D scenes.
Each primitive is an efficient generative model that captures the distribution of a single exemplar from the real world.
We showcase interactive sessions where various primitives are extracted from real-world scenes and controlled to create 3D assets and scenes in a few minutes.
arXiv Detail & Related papers (2024-12-20T04:39:50Z)
- 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network to synthesize new content with higher quality by exploiting the natural image prior of the 2D diffusion model and the global 3D information of the current scene.
Our approach supports a wide variety of scene generation tasks and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z)
- CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Canonicalization [27.55341255800119]
We present CharacterGen, a framework developed to efficiently generate 3D characters.
Another core component of our approach is a transformer-based, generalizable sparse-view reconstruction model.
We have curated a dataset of anime characters, rendered in multiple poses and views, to train and evaluate our model.
arXiv Detail & Related papers (2024-02-27T05:10:59Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, and then transforms them into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model generates photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
arXiv Detail & Related papers (2023-12-13T18:59:30Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- GAUDI: A Neural Architect for Immersive 3D Scene Generation [67.97817314857917]
GAUDI is a generative model capable of capturing the distribution of complex and realistic 3D scenes that can be rendered immersively from a moving camera.
We show that GAUDI obtains state-of-the-art performance in the unconditional generative setting across multiple datasets.
arXiv Detail & Related papers (2022-07-27T19:10:32Z)