Proc-GS: Procedural Building Generation for City Assembly with 3D Gaussians
- URL: http://arxiv.org/abs/2412.07660v1
- Date: Tue, 10 Dec 2024 16:45:32 GMT
- Title: Proc-GS: Procedural Building Generation for City Assembly with 3D Gaussians
- Authors: Yixuan Li, Xingjian Ran, Linning Xu, Tao Lu, Mulin Yu, Zhenzhi Wang, Yuanbo Xiangli, Dahua Lin, Bo Dai,
- Abstract summary: Building asset creation is labor-intensive and requires specialized skills to develop design rules.
Recent generative models for building creation often overlook these patterns, leading to low visual fidelity and limited scalability.
By manipulating procedural code, we can streamline this process and generate an infinite variety of buildings.
- Abstract: Buildings are primary components of cities, often featuring repeated elements such as windows and doors. Traditional 3D building asset creation is labor-intensive and requires specialized skills to develop design rules. Recent generative models for building creation often overlook these patterns, leading to low visual fidelity and limited scalability. Drawing inspiration from procedural modeling techniques used in the gaming and visual effects industries, our method, Proc-GS, integrates procedural code into the 3D Gaussian Splatting (3D-GS) framework, combining the advantages of both worlds: high-fidelity rendering and efficient asset management. By manipulating procedural code, we can streamline this process and generate an infinite variety of buildings. This integration significantly reduces model size by sharing foundational assets, enabling scalable generation with precise control over building assembly. We showcase the potential for expansive cityscape generation while maintaining high rendering fidelity and precise control in both real and synthetic cases.
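The core storage argument of the abstract — procedural rules place many instances of a few shared assets, so model size scales with unique assets rather than placed copies — can be illustrated with a minimal sketch. This is not the authors' implementation; the asset here is just a set of 3D points standing in for a Gaussian cluster, and `tile_facade` is a hypothetical stand-in for a procedural rule:

```python
def make_window_asset():
    """A stand-in asset: 3D points at the corners of a unit-square window."""
    return [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]

def tile_facade(asset, cols, rows, spacing=2.0):
    """Procedural rule: replicate one shared asset on a cols x rows grid.

    Only per-instance transforms (here, translations) are generated;
    the asset geometry itself is stored once.
    """
    instances = []
    for r in range(rows):
        for c in range(cols):
            dx, dy = c * spacing, r * spacing
            instances.append([(x + dx, y + dy, z) for x, y, z in asset])
    return instances

window = make_window_asset()
facade = tile_facade(window, cols=4, rows=3)
print(len(facade))  # 12 placed instances, one stored asset definition
```

In the paper's setting the shared asset would be a reconstructed 3D-GS primitive (a window, door, etc.) and the grid rule would be expressed in procedural code, but the size-reduction mechanism is the same: transforms are cheap, geometry is shared.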
Related papers
- CityX: Controllable Procedural Content Generation for Unbounded 3D Cities [50.10101235281943]
Current generative methods fall short in either diversity, controllability, or fidelity.
In this work, we resort to the procedural content generation (PCG) technique for high-fidelity generation.
We develop a multi-agent framework to transform multi-modal instructions, including OSM, semantic maps, and satellite images, into executable programs.
Our method, named CityX, demonstrates its superiority in creating diverse, controllable, and realistic 3D urban scenes.
arXiv Detail & Related papers (2024-07-24T18:05:13Z)
- Enhancement of 3D Gaussian Splatting using Raw Mesh for Photorealistic Recreation of Architectures [12.96911281844627]
We propose a method to harness raw 3D models to guide 3D Gaussians in capturing the basic shape of a building.
This exploration opens up new possibilities for improving the effectiveness of 3D reconstruction techniques in the field of architectural design.
arXiv Detail & Related papers (2024-07-22T07:29:38Z)
- GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality [99.63429416013713]
3D-GS has achieved great success in reconstructing and rendering real-world scenes.
To transfer the high rendering quality to generation tasks, a series of research works attempt to generate 3D-Gaussian assets from text.
We propose a novel framework named GaussianDreamerPro to enhance the generation quality.
arXiv Detail & Related papers (2024-06-26T16:12:09Z)
- MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers [76.70891862458384]
We introduce MeshAnything, a model that treats mesh extraction as a generation problem.
By converting 3D assets in any 3D representation into Artist-Created Meshes (AMs), MeshAnything can be integrated with various 3D asset production methods.
Our method generates AMs with hundreds of times fewer faces, significantly improving storage, rendering, and simulation efficiencies.
arXiv Detail & Related papers (2024-06-14T16:30:25Z)
- CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets [43.315487682462845]
CLAY is a 3D geometry and material generator designed to transform human imagination into intricate 3D digital structures.
At its core is a large-scale generative model composed of a multi-resolution Variational Autoencoder (VAE) and a minimalistic latent Diffusion Transformer (DiT).
We demonstrate using CLAY for a range of controllable 3D asset creations, from sketchy conceptual designs to production ready assets with intricate details.
arXiv Detail & Related papers (2024-05-30T05:57:36Z)
- SceneX: Procedural Controllable Large-scale Scene Generation [52.4743878200172]
We introduce SceneX, which can automatically produce high-quality procedural models according to designers' textual descriptions.
The proposed method comprises two components, PCGHub and PCGPlanner.
The latter aims to generate executable actions for Blender to produce controllable and precise 3D assets guided by the user's instructions.
arXiv Detail & Related papers (2024-03-23T03:23:29Z)
- CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting [57.14748263512924]
CG3D is a method for compositionally generating scalable 3D assets.
Gaussian radiance fields, parameterized to allow for compositions of objects, possess the capability to enable semantically and physically consistent scenes.
arXiv Detail & Related papers (2023-11-29T18:55:38Z)
- 3D-GPT: Procedural 3D Modeling with Large Language Models [47.72968643115063]
We introduce 3D-GPT, a framework utilizing large language models (LLMs) for instruction-driven 3D modeling.
3D-GPT positions LLMs as proficient problem solvers, breaking procedural 3D modeling tasks into accessible segments and appointing the apt agent for each task.
Our empirical investigations confirm that 3D-GPT not only interprets and executes instructions, delivering reliable results, but also collaborates effectively with human designers.
arXiv Detail & Related papers (2023-10-19T17:41:48Z)
- BuilDiff: 3D Building Shape Generation using Single-Image Conditional Point Cloud Diffusion Models [15.953480573461519]
We propose a novel 3D building shape generation method exploiting point cloud diffusion models with image conditioning schemes.
We validate our framework on two newly built datasets and extensive experiments show that our method outperforms previous works in terms of building generation quality.
arXiv Detail & Related papers (2023-08-31T22:17:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.