Scene-Conditional 3D Object Stylization and Composition
- URL: http://arxiv.org/abs/2312.12419v1
- Date: Tue, 19 Dec 2023 18:50:33 GMT
- Title: Scene-Conditional 3D Object Stylization and Composition
- Authors: Jinghao Zhou, Tomas Jakab, Philip Torr, Christian Rupprecht
- Abstract summary: 3D generative models have made impressive progress, enabling the generation of almost arbitrary 3D assets from text or image inputs.
We propose a framework that allows for the stylization of an existing 3D asset to fit into a given 2D scene, and additionally produce a photorealistic composition as if the asset was placed within the environment.
- Score: 30.120066605881448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, 3D generative models have made impressive progress, enabling the
generation of almost arbitrary 3D assets from text or image inputs. However,
these approaches generate objects in isolation without any consideration for
the scene where they will eventually be placed. In this paper, we propose a
framework that allows for the stylization of an existing 3D asset to fit into a
given 2D scene, and additionally produce a photorealistic composition as if the
asset was placed within the environment. This not only opens up a new level of
control for object stylization (for example, the same asset can be stylized to
reflect changes in the environment, such as summer to winter or fantasy versus
futuristic settings) but also makes the object-scene composition more
controllable. We achieve this by modeling and optimizing the object's texture
and the environmental lighting through differentiable ray tracing, guided by
image priors from pre-trained text-to-image diffusion models. We demonstrate
that our method is applicable to a wide variety of indoor and outdoor scenes
and arbitrary objects.
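A minimal sketch of the optimization idea described in the abstract, assuming a score-distillation-style setup: the object's texture and an environment-lighting map are learnable tensors that are rendered differentiably and updated with gradients derived from a pretrained text-to-image diffusion model. This is not the authors' implementation; the renderer and the diffusion guidance below are simplified placeholders, and all names and tensor shapes are illustrative assumptions.

```python
import torch

# Learnable quantities: a per-texel albedo map for the object and a low-resolution
# environment-lighting map for the scene (shapes are assumptions for this sketch).
texture = torch.rand(3, 64, 64, requires_grad=True)
env_light = torch.rand(3, 16, 32, requires_grad=True)
optimizer = torch.optim.Adam([texture, env_light], lr=1e-2)

def render(texture, env_light):
    """Stand-in for differentiable ray tracing: shade the albedo by the mean
    irradiance of the environment map. A real pipeline would trace rays against
    the asset's mesh and integrate the environment lighting properly."""
    irradiance = env_light.clamp(min=0).mean(dim=(1, 2), keepdim=True)   # (3, 1, 1)
    return (texture.clamp(0, 1) * irradiance).unsqueeze(0)               # (1, 3, H, W)

def diffusion_guidance(image):
    """Placeholder for an image prior from a pretrained text-to-image diffusion
    model (e.g. a score-distillation gradient conditioned on a description of
    the target scene). Random noise here keeps the sketch self-contained."""
    return 0.01 * torch.randn_like(image)

for step in range(200):
    optimizer.zero_grad()
    rendering = render(texture, env_light)
    # Score-distillation-style update: push the rendering toward the prior by
    # backpropagating the guidance signal through the differentiable renderer.
    rendering.backward(gradient=diffusion_guidance(rendering))
    optimizer.step()
```

In a real pipeline, render would be a full differentiable ray tracer over the asset's geometry, and diffusion_guidance would return the denoiser-based gradient conditioned on a text prompt describing the target scene.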
Related papers
- Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting [47.014044892025346]
Architect is a generative framework that creates complex and realistic 3D embodied environments leveraging diffusion-based 2D image inpainting.
Our pipeline is further extended to a hierarchical and iterative inpainting process to continuously generate placement of large furniture and small objects to enrich the scene.
arXiv Detail & Related papers (2024-11-14T22:15:48Z)
- Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z)
- Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models [32.51506331929564]
We propose to use a set of per-object representations, Neural Assets, to control the 3D pose of individual objects in a scene.
Our model achieves state-of-the-art multi-object editing results on both synthetic 3D scene datasets and two real-world video datasets.
arXiv Detail & Related papers (2024-06-13T16:29:18Z)
- Disentangled 3D Scene Generation with Layout Learning [109.03233745767062]
We introduce a method to generate 3D scenes that are disentangled into their component objects.
Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene.
We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects.
arXiv Detail & Related papers (2024-02-26T18:54:15Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- SceneWiz3D: Towards Text-guided 3D Scene Composition [134.71933134180782]
Existing approaches either leverage large text-to-image models to optimize a 3D representation or train 3D generators on object-centric datasets.
We introduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes from text.
arXiv Detail & Related papers (2023-12-13T18:59:30Z)
- CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting [57.14748263512924]
CG3D is a method for compositionally generating scalable 3D assets.
Gaussian radiance fields, parameterized to allow for compositions of objects, enable semantically and physically consistent scenes.
arXiv Detail & Related papers (2023-11-29T18:55:38Z)
- Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields [26.85599376826124]
We present Blended-NeRF, a framework for editing a specific region of interest in an existing NeRF scene.
We allow local editing by localizing a 3D ROI box in the input scene, and blend the content synthesized inside the ROI with the existing scene.
We show our framework for several 3D editing applications, including adding new objects to a scene, removing/altering existing objects, and texture conversion.
arXiv Detail & Related papers (2023-06-22T09:34:55Z)
- PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes [84.66946637534089]
PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
arXiv Detail & Related papers (2022-07-02T06:52:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.