Infinigen Indoors: Photorealistic Indoor Scenes using Procedural Generation
- URL: http://arxiv.org/abs/2406.11824v1
- Date: Mon, 17 Jun 2024 17:57:50 GMT
- Title: Infinigen Indoors: Photorealistic Indoor Scenes using Procedural Generation
- Authors: Alexander Raistrick, Lingjie Mei, Karhan Kayan, David Yan, Yiming Zuo, Beining Han, Hongyu Wen, Meenal Parakh, Stamatis Alexandropoulos, Lahav Lipson, Zeyu Ma, Jia Deng
- Abstract summary: Infinigen Indoors is a procedural generator of photorealistic indoor scenes.
It builds upon the existing Infinigen system, which focuses on natural scenes.
- Score: 64.00495042910761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Infinigen Indoors, a Blender-based procedural generator of photorealistic indoor scenes. It builds upon the existing Infinigen system, which focuses on natural scenes, but expands its coverage to indoor scenes by introducing a diverse library of procedural indoor assets, including furniture, architecture elements, appliances, and other day-to-day objects. It also introduces a constraint-based arrangement system, which consists of a domain-specific language for expressing diverse constraints on scene composition, and a solver that generates scene compositions that maximally satisfy the constraints. We provide an export tool that allows the generated 3D objects and scenes to be directly used for training embodied agents in real-time simulators such as Omniverse and Unreal. Infinigen Indoors is open-sourced under the BSD license. Please visit https://infinigen.org for code and videos.
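The abstract describes a constraint-based arrangement system: a domain-specific language for expressing constraints on scene composition, plus a solver that generates compositions maximally satisfying them. As a rough illustration of that idea only (this is not Infinigen's actual API; every name and function here is hypothetical), a toy version in Python might express each constraint as a scoring function over a layout and optimize the total score by hill climbing:

```python
import random

# Hypothetical miniature analogue of a constraint DSL. A "layout" maps
# object names to (x, y) positions in a rectangular room; each constraint
# returns a satisfaction score in [0, 1] for a given layout.

def against_wall(name, room_w, room_h, margin=0.5):
    """Score 1.0 when the object is within `margin` of some wall."""
    def score(layout):
        x, y = layout[name]
        d = min(x, y, room_w - x, room_h - y)  # distance to nearest wall
        return 1.0 if d <= margin else max(0.0, 1.0 - (d - margin))
    return score

def min_distance(a, b, d_min):
    """Score 1.0 when objects a and b are at least d_min apart."""
    def score(layout):
        (xa, ya), (xb, yb) = layout[a], layout[b]
        d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        return min(1.0, d / d_min)
    return score

def solve(names, constraints, room_w, room_h, iters=2000, seed=0):
    """Hill climbing: perturb one object at a time, keep non-worsening moves."""
    rng = random.Random(seed)
    layout = {n: (rng.uniform(0, room_w), rng.uniform(0, room_h)) for n in names}
    best = sum(c(layout) for c in constraints)
    for _ in range(iters):
        n = rng.choice(names)
        old = layout[n]
        layout[n] = (min(room_w, max(0.0, old[0] + rng.gauss(0, 0.5))),
                     min(room_h, max(0.0, old[1] + rng.gauss(0, 0.5))))
        s = sum(c(layout) for c in constraints)
        if s >= best:
            best = s
        else:
            layout[n] = old  # revert worsening move
    return layout, best
```

For example, `solve(["sofa", "table"], [against_wall("sofa", 5, 4), min_distance("sofa", "table", 1.5)], 5, 4)` nudges the sofa toward a wall while keeping the table away from it. Infinigen Indoors' real solver and DSL are far richer (see the open-source code at https://infinigen.org); this sketch only conveys the "score constraints, maximize satisfaction" structure.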
Related papers
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z)
- Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- FurniScene: A Large-scale 3D Room Dataset with Intricate Furnishing Scenes [57.47534091528937]
FurniScene is a large-scale 3D room dataset with intricate furnishing scenes from interior design professionals.
Specifically, FurniScene consists of 11,698 rooms and 39,691 unique furniture CAD models across 89 different types.
To better suit fine-grained indoor scene layout generation, we introduce a novel Two-Stage Diffusion Scene Model (TSDSM).
arXiv Detail & Related papers (2024-01-07T12:34:45Z)
- Scene-Conditional 3D Object Stylization and Composition [30.120066605881448]
3D generative models have made impressive progress, enabling the generation of almost arbitrary 3D assets from text or image inputs.
We propose a framework that allows for the stylization of an existing 3D asset to fit into a given 2D scene, and additionally produce a photorealistic composition as if the asset was placed within the environment.
arXiv Detail & Related papers (2023-12-19T18:50:33Z)
- Infinite Photorealistic Worlds using Procedural Generation [135.10236145573043]
Infinigen is a procedural generator of photorealistic 3D scenes of the natural world.
Every asset, from shape to texture, is generated from scratch via randomized mathematical rules.
arXiv Detail & Related papers (2023-06-15T17:46:16Z)
- DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis [90.32352050266104]
DisCoScene is a 3D-aware generative model for high-quality and controllable scene synthesis.
It disentangles the whole scene into object-centric generative fields by learning on only 2D images with the global-local discrimination.
We demonstrate state-of-the-art performance on many scene datasets, including the challenging outdoor dataset.
arXiv Detail & Related papers (2022-12-22T18:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.