Neural Scene Decoration from a Single Photograph
- URL: http://arxiv.org/abs/2108.01806v1
- Date: Wed, 4 Aug 2021 01:44:21 GMT
- Title: Neural Scene Decoration from a Single Photograph
- Authors: Hong-Wing Pang, Yingshu Chen, Binh-Son Hua, Sai-Kit Yeung
- Abstract summary: We introduce a new problem of domain-specific image synthesis using generative modeling, namely neural scene decoration.
Given a photograph of an empty indoor space, we aim to synthesize a new image of the same space that is fully furnished and decorated.
Our network contains a novel image generator that transforms an initial point-based object layout into a realistic photograph.
- Score: 24.794743085391953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Furnishing and rendering an indoor scene is a common but tedious task for
interior design: an artist needs to observe the space, create a conceptual
design, build a 3D model, and perform rendering. In this paper, we introduce a
new problem of domain-specific image synthesis using generative modeling,
namely neural scene decoration. Given a photograph of an empty indoor space, we
aim to synthesize a new image of the same space that is fully furnished and
decorated. Neural scene decoration can be applied in practice to efficiently
generate conceptual but realistic interior designs, bypassing the traditional
multi-step and time-consuming pipeline. Our approach to neural scene decoration
in this paper is a generative adversarial network that takes the input
photograph and directly produces an image with the desired furnishings and
decorations. Our network contains a novel image generator that transforms an
initial point-based object layout into a realistic photograph. We demonstrate
the performance of our proposed method by showing that it outperforms the
baselines built upon previous image translation works, both qualitatively
and quantitatively. Our user study further validates the plausibility and
aesthetics of the generated designs.
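As a rough illustration of the setup described in the abstract, the sketch below shows one way an empty-room photograph and a point-based object layout could be fed to a conditional generator. It is a minimal sketch in PyTorch under assumed choices: the Gaussian-heatmap layout encoding, the helper names (`layout_to_heatmap`, `DecorationGenerator`), and the encoder-decoder widths are illustrative and not the paper's actual architecture or training code (which additionally involves an adversarial discriminator).

```python
# Minimal sketch (not the authors' code): a conditional generator that consumes
# an empty-room photo plus a point-based object layout rendered as a sparse
# per-class heatmap, and emits a furnished image. All sizes are illustrative.
import torch
import torch.nn as nn


def layout_to_heatmap(points, num_classes, height, width, sigma=8.0):
    """Rasterize (class_id, x, y) object points into a [num_classes, H, W] map
    of Gaussian blobs -- one simple way to feed a point-based layout to a CNN."""
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    heatmap = torch.zeros(num_classes, height, width)
    for cls, cx, cy in points:
        blob = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heatmap[cls] = torch.maximum(heatmap[cls], blob)
    return heatmap


class DecorationGenerator(nn.Module):
    """Encoder-decoder conditioned on the empty room and the layout heatmap."""

    def __init__(self, num_classes, base=64):
        super().__init__()
        in_ch = 3 + num_classes  # RGB photo + one layout channel per object class
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1), nn.Tanh(),  # furnished image
        )

    def forward(self, empty_room, layout_heatmap):
        x = torch.cat([empty_room, layout_heatmap], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    num_classes = 5
    points = [(0, 64.0, 96.0), (3, 180.0, 140.0)]  # e.g. (sofa, x, y), (lamp, x, y)
    heatmap = layout_to_heatmap(points, num_classes, 256, 256).unsqueeze(0)
    empty_room = torch.rand(1, 3, 256, 256)        # placeholder for the photograph
    furnished = DecorationGenerator(num_classes)(empty_room, heatmap)
    print(furnished.shape)                         # torch.Size([1, 3, 256, 256])
```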
Related papers
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Unveiling Spaces: Architecturally meaningful semantic descriptions from images of interior spaces [0.0]
This project aims to tackle the problem of extracting architecturally meaningful semantic descriptions from two-dimensional scenes of populated interior spaces.
A Generative Adversarial Network (GAN) for image-to-image translation (Pix2Pix) is trained on synthetically generated rendered images of these enclosures, along with corresponding image abstractions representing high-level architectural structure.
A similar model evaluation is also carried out on photographs of existing indoor enclosures, to measure its performance in real-world settings.
arXiv Detail & Related papers (2023-12-19T16:03:04Z)
- SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input Views [16.72880076920758]
We present SparseGNV, a learning framework that incorporates 3D structures and image generative models to generate novel views.
SparseGNV is trained across a large indoor scene dataset to learn generalizable priors.
It can efficiently generate novel views of an unseen indoor scene in a feed-forward manner.
arXiv Detail & Related papers (2023-05-11T17:58:37Z)
- Neural Rendering in a Room: Amodal 3D Understanding and Free-Viewpoint Rendering for the Closed Scene Composed of Pre-Captured Objects [40.59508249969956]
We present a novel solution to mimic such human perception capability based on a new paradigm of amodal 3D scene understanding with neural rendering for a closed scene.
We first learn the prior knowledge of the objects in a closed scene via an offline stage, which facilitates an online stage to understand the room with unseen furniture arrangement.
During the online stage, given a panoramic image of the scene in different layouts, we utilize a holistic neural-rendering-based optimization framework to efficiently estimate the correct 3D scene layout and deliver realistic free-viewpoint rendering.
arXiv Detail & Related papers (2022-05-05T15:34:09Z)
- EgoRenderer: Rendering Human Avatars from Egocentric Camera Images [87.96474006263692]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera.
Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions.
We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation.
arXiv Detail & Related papers (2021-11-24T18:33:02Z)
- Advances in Neural Rendering [115.05042097988768]
This report focuses on methods that combine classical rendering with learned 3D scene representations.
A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene.
In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects.
arXiv Detail & Related papers (2021-11-10T18:57:01Z)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis [68.3787368024951]
We propose a novel approach for few-shot talking-head synthesis.
We show that this disentangled representation leads to a significant improvement over previous methods.
arXiv Detail & Related papers (2021-04-29T17:59:42Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- CONFIG: Controllable Neural Face Image Generation [10.443563719622645]
ConfigNet is a neural face model that allows for controlling individual aspects of output images in meaningful ways.
Our novel method uses synthetic data to factorize the latent space into elements that correspond to the inputs of a traditional rendering pipeline.
arXiv Detail & Related papers (2020-05-06T09:19:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.