Plan2Scene: Converting Floorplans to 3D Scenes
- URL: http://arxiv.org/abs/2106.05375v1
- Date: Wed, 9 Jun 2021 20:32:20 GMT
- Title: Plan2Scene: Converting Floorplans to 3D Scenes
- Authors: Madhawa Vidanapathirana, Qirui Wu, Yasutaka Furukawa, Angel X. Chang, and Manolis Savva
- Abstract summary: We address the task of converting a floorplan and a set of associated photos of a residence into a textured 3D mesh model.
Our system 1) lifts a floorplan image to a 3D mesh model; 2) synthesizes surface textures based on the input photos; and 3) infers textures for unobserved surfaces using a graph neural network architecture.
- Score: 36.34298107648571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the task of converting a floorplan and a set of associated photos
of a residence into a textured 3D mesh model, a task which we call Plan2Scene.
Our system 1) lifts a floorplan image to a 3D mesh model; 2) synthesizes
surface textures based on the input photos; and 3) infers textures for
unobserved surfaces using a graph neural network architecture. To train and
evaluate our system, we create indoor surface texture datasets, and augment a
dataset of floorplans and photos from prior work with rectified surface crops
and additional annotations. Our approach handles the challenge of producing
tileable textures for dominant surfaces such as floors, walls, and ceilings
from a sparse set of unaligned photos that only partially cover the residence.
Qualitative and quantitative evaluations show that our system produces
realistic 3D interior models, outperforming baseline approaches on a suite of
texture quality metrics and as measured by a holistic user study.
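The three-stage pipeline is concrete enough to sketch. Below is a minimal illustrative sketch of stage 3 (inferring textures for unobserved surfaces) as toy mean-aggregation message passing over the floorplan's room-adjacency graph; the class and function names, the aggregation rule, and the embedding size are assumptions for illustration, not the paper's actual GNN architecture.

```python
# Minimal sketch of Plan2Scene's third stage: propagating texture
# embeddings from photographed rooms to unobserved ones over the
# floorplan's room-adjacency graph. All names and the mean-aggregation
# update are illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn

class TexturePropagationGNN(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj is the (rooms x rooms) binary adjacency of the floorplan.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbour_mean = (adj @ emb) / deg
        return torch.relu(self.update(torch.cat([emb, neighbour_mean], dim=1)))

def infer_unobserved(adj, photo_emb, observed, dim=16, steps=2):
    """Fill in embeddings for rooms with no photos by message passing,
    keeping photographed rooms pinned to their photo-derived values."""
    gnn = TexturePropagationGNN(dim)
    emb = torch.where(observed[:, None], photo_emb, torch.zeros_like(photo_emb))
    for _ in range(steps):
        emb = gnn(emb, adj)
        emb = torch.where(observed[:, None], photo_emb, emb)
    return emb

if __name__ == "__main__":
    # 4 rooms; rooms 0 and 2 have photos, rooms 1 and 3 do not.
    adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 1],
                        [0, 1, 0, 0], [0, 1, 0, 0]], dtype=torch.float32)
    photo_emb = torch.randn(4, 16)
    observed = torch.tensor([True, False, True, False])
    print(infer_unobserved(adj, photo_emb, observed).shape)  # (4, 16)
```

In the full system the propagated embeddings would be decoded into tileable textures by the stage-2 synthesis module; this sketch stops at the raw vectors.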
Related papers
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- 3D-TexSeg: Unsupervised Segmentation of 3D Texture using Mutual Transformer Learning [11.510823733292519]
This paper presents an original framework for the unsupervised segmentation of 3D texture on the mesh manifold.
We devise a mutual transformer-based system comprising a label generator and a cleaner.
Experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and SOTA unsupervised techniques.
arXiv Detail & Related papers (2023-11-17T17:13:14Z)
- TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in fidelity and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z)
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models [21.622420436349245]
We present Text2Room, a method for generating room-scale textured 3D meshes from a given text prompt.
We leverage pre-trained 2D text-to-image models to synthesize a sequence of images from different poses.
To lift these outputs into a consistent 3D scene representation, we combine monocular depth estimation with a text-conditioned inpainting model.
arXiv Detail & Related papers (2023-03-21T16:21:02Z)
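The Text2Room summary describes its iterative pipeline concretely enough to sketch the control flow. The following is a structural sketch only: the text-conditioned inpainting and monocular depth networks are replaced by trivial stubs, and every function name is a hypothetical stand-in rather than the paper's code.

```python
# Structural sketch of a Text2Room-style loop, with stub models in place
# of the real text-to-image, inpainting, and depth networks.
import numpy as np

def inpaint_with_text_model(image, mask, prompt):
    # Stand-in for a text-conditioned inpainting model (e.g. a diffusion
    # model); here it just fills masked pixels with the image mean.
    filled = image.copy()
    filled[mask] = image[~mask].mean() if (~mask).any() else 0.5
    return filled

def estimate_depth(image):
    # Stand-in for a monocular depth estimator.
    return np.full(image.shape[:2], 2.0)

def backproject(depth, pose, fov=np.pi / 2):
    # Unproject a depth map to 3D points in world space (pinhole camera).
    h, w = depth.shape
    f = 0.5 * w / np.tan(0.5 * fov)
    v, u = np.mgrid[0:h, 0:w]
    rays = np.stack([(u - w / 2) / f, (v - h / 2) / f, np.ones_like(depth)], -1)
    cam_points = rays * depth[..., None]
    R, t = pose  # world-from-camera rotation and translation
    return cam_points.reshape(-1, 3) @ R.T + t

def text2room_sketch(prompt, poses, h=64, w=64):
    scene_points = []
    for pose in poses:
        # 1) Render the current scene from this pose; unseen pixels are
        #    masked. Rendering is stubbed out as an empty frame here.
        image = np.zeros((h, w, 3))
        unseen = np.ones((h, w), dtype=bool)
        # 2) Fill unseen regions with the text-conditioned model.
        image = inpaint_with_text_model(image, unseen, prompt)
        # 3) Lift to 3D via monocular depth and fuse into the scene.
        depth = estimate_depth(image)
        scene_points.append(backproject(depth, pose))
    return np.concatenate(scene_points)

if __name__ == "__main__":
    poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.0, 0.0, 1.0]))]
    pts = text2room_sketch("a cozy living room", poses)
    print(pts.shape)  # (2 * 64 * 64, 3)
```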
- PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes [84.66946637534089]
PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
arXiv Detail & Related papers (2022-07-02T06:52:44Z)
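"Procedural material graphs" can be made concrete with a toy example: a material defined as a graph of procedural nodes can be re-evaluated at any resolution, which is the resolution-independence the summary refers to. This is a minimal numpy illustration with made-up nodes, not PhotoScene's actual graph library.

```python
# Toy procedural material graph: a small chain of node functions that
# can be evaluated at any resolution. Node names are illustrative; real
# systems (and PhotoScene) use far richer node libraries.
import numpy as np

def checker(res, tiles=8):
    # Generator node: procedural checkerboard pattern in {0, 1}.
    u, v = np.mgrid[0:res, 0:res] / res
    return (np.floor(u * tiles) + np.floor(v * tiles)) % 2

def blur(x):
    # Filter node: tiny box blur via rolls (keeps the texture tileable).
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9

def colorize(x, dark, light):
    # Output node: map the scalar field to an RGB albedo.
    dark, light = np.asarray(dark), np.asarray(light)
    return dark + x[..., None] * (light - dark)

def evaluate_material(res, params):
    # Evaluate the graph checker -> blur -> colorize at resolution `res`.
    # `params` are the graph's free parameters, which an inverse-rendering
    # method like PhotoScene would optimize to match input photos.
    field = blur(checker(res, params["tiles"]))
    return colorize(field, params["dark"], params["light"])

if __name__ == "__main__":
    params = {"tiles": 6, "dark": [0.2, 0.15, 0.1], "light": [0.8, 0.7, 0.6]}
    # Same material at two resolutions: no fidelity is lost at 512.
    print(evaluate_material(64, params).shape, evaluate_material(512, params).shape)
```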
- EigenFairing: 3D Model Fairing using Image Coherence [0.884755712094096]
A surface is often modeled as a triangulated mesh of 3D points and textures associated with faces of the mesh.
When the points do not lie at critical points of maximum curvature or discontinuities of the real surface, faces of the mesh do not lie close to the modeled surface.
This paper presents a technique for perfecting the 3D surface model by repositioning its vertices so that it is coherent with a set of observed images of the object.
arXiv Detail & Related papers (2022-06-10T18:13:19Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to predict texture directly on the surface of a 3D input shape.
Our method does not require any 3D color supervision to learn to texture 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry (represented as a continuous 3D volume) from appearance (represented as a continuous 2D texture map).
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
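The geometry/appearance split NeuTex describes can be sketched directly: one network models density over 3D space, a mapping network sends 3D points to 2D texture coordinates, and a texture network turns those coordinates into color. Names, sizes, and the MLP layout below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a NeuTex-style disentangled field: geometry as a
# continuous 3D density volume, appearance as a continuous 2D texture
# map reached through a learned 3D -> UV mapping. Illustrative only.
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

class DisentangledField(nn.Module):
    def __init__(self):
        super().__init__()
        self.density = mlp(3, 1)   # geometry: continuous 3D volume
        self.to_uv = mlp(3, 2)     # mapping: 3D point -> 2D texture coords
        self.texture = mlp(2, 3)   # appearance: 2D texture map -> RGB

    def forward(self, xyz: torch.Tensor):
        sigma = torch.relu(self.density(xyz))   # volume density
        uv = torch.sigmoid(self.to_uv(xyz))     # coords in [0, 1]^2
        rgb = torch.sigmoid(self.texture(uv))   # sampled appearance
        return sigma, rgb, uv

if __name__ == "__main__":
    field = DisentangledField()
    pts = torch.rand(1024, 3)      # query points along camera rays
    sigma, rgb, uv = field(pts)
    # Because appearance lives in the 2D uv domain, the texture network
    # can be queried on a grid to export an editable texture image.
    grid = torch.stack(torch.meshgrid(torch.linspace(0, 1, 32),
                                      torch.linspace(0, 1, 32),
                                      indexing="ij"), dim=-1).reshape(-1, 2)
    print(sigma.shape, rgb.shape, torch.sigmoid(field.texture(grid)).shape)
```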
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
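To make the "parametric 3D surface representation" in the Pix2Surf summary concrete, here is a minimal sketch of a UV-to-3D surface decoder in that spirit; the image-derived latent code, the MLP layout, and all names are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch: an MLP that maps 2D UV parameters (plus an
# image-derived latent code) to 3D surface points, in the spirit of
# parametric surface models such as Pix2Surf. Names are hypothetical.
import torch
import torch.nn as nn

class SurfaceDecoder(nn.Module):
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # (x, y, z) surface point
        )

    def forward(self, uv: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # uv: (n, 2) surface parameters in [0, 1]^2
        # z:  (latent_dim,) image-derived shape code, shared by all points
        z = z.expand(uv.shape[0], -1)
        return self.mlp(torch.cat([uv, z], dim=1))

if __name__ == "__main__":
    decoder = SurfaceDecoder()
    uv = torch.rand(256, 2)   # sample the parameter domain
    z = torch.randn(64)       # stand-in for an image encoder's output
    points = decoder(uv, z)   # (256, 3) points on the surface
    print(points.shape)
```

Because the surface is a continuous map from UV space, it can be sampled at any density, which is what makes such representations convenient for consistent multi-view supervision.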
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of this information is not guaranteed, and the site is not responsible for any consequences arising from its use.