Image-Driven Furniture Style for Interactive 3D Scene Modeling
- URL: http://arxiv.org/abs/2010.10557v1
- Date: Tue, 20 Oct 2020 18:19:28 GMT
- Title: Image-Driven Furniture Style for Interactive 3D Scene Modeling
- Authors: Tomer Weiss, Ilkay Yildiz, Nitin Agarwal, Esra Ataer-Cansizoglu,
Jae-Woo Choi
- Abstract summary: Interior style follows rules involving color, geometry and other visual elements.
We propose a method for fast-tracking style-similarity tasks, by learning a furniture's style-compatibility from interior scene images.
We demonstrate our method with several 3D model style-compatibility results, and with an interactive system for modeling style-consistent scenes.
- Score: 8.8561720398658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating realistic styled spaces is a complex task, which involves design
know-how for what furniture pieces go well together. Interior style follows
abstract rules involving color, geometry and other visual elements. Following
such rules, users manually select similar-style items from large repositories
of 3D furniture models, a process which is both laborious and time-consuming.
We propose a method for fast-tracking style-similarity tasks, by learning a
furniture's style-compatibility from interior scene images. Such images contain
more style information than images depicting a single furniture piece. To understand
style, we train a deep learning network on a classification task. Based on
image embeddings extracted from our network, we measure stylistic compatibility
of furniture. We demonstrate our method with several 3D model
style-compatibility results, and with an interactive system for modeling
style-consistent scenes.
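The pipeline the abstract describes — embed each furniture piece with a classification-trained network, then compare embeddings to score compatibility — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the embeddings are random stand-ins for network outputs, the item names are hypothetical, and cosine similarity is one common choice of compatibility measure.

```python
import numpy as np

def style_compatibility(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two furniture embeddings; higher = more compatible."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

def rank_by_compatibility(query: np.ndarray, catalog: dict) -> list:
    """Rank catalog items by style compatibility with the query piece."""
    scores = [(name, style_compatibility(query, emb)) for name, emb in catalog.items()]
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

# Toy stand-ins for embeddings a trained network would produce.
rng = np.random.default_rng(0)
sofa = rng.normal(size=128)
catalog = {
    "mid_century_chair": sofa + 0.1 * rng.normal(size=128),  # nearly the same style
    "baroque_cabinet": rng.normal(size=128),                 # unrelated style
}
ranking = rank_by_compatibility(sofa, catalog)
```

With embeddings in hand, retrieving style-compatible items from a large 3D model repository reduces to a nearest-neighbor query in the embedding space, which is what makes the interactive modeling workflow fast.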
Related papers
- MRStyle: A Unified Framework for Color Style Transfer with Multi-Modality Reference [32.64957647390327]
We introduce MRStyle, a framework that enables color style transfer using multi-modality reference, including image and text.
For text reference, we align the text feature of stable diffusion priors with the style feature of our IRStyle to perform text-guided color style transfer (TRStyle)
Our TRStyle method is highly efficient in both training and inference, producing notable open-set text-guided transfer results.
arXiv Detail & Related papers (2024-09-09T00:01:48Z)
- Implicit Style-Content Separation using B-LoRA [61.664293840163865]
We introduce B-LoRA, a method that implicitly separates the style and content components of a single image.
By analyzing the architecture of SDXL combined with LoRA, we find that jointly learning the LoRA weights of two specific blocks achieves style-content separation.
arXiv Detail & Related papers (2024-03-21T17:20:21Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- RoomDesigner: Encoding Anchor-latents for Style-consistent and Shape-compatible Indoor Scene Generation [26.906174238830474]
Indoor scene generation aims at creating shape-compatible, style-consistent furniture arrangements within a spatially reasonable layout.
We propose a two-stage model integrating shape priors into the indoor scene generation by encoding furniture as anchor latent representations.
arXiv Detail & Related papers (2023-10-16T03:05:19Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
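The input-dependent temperature mentioned above can be pictured with a schematic InfoNCE-style contrastive loss. This sketch is illustrative only, not UCAST's actual objective: in the paper the temperature is predicted from the input by the network, whereas here it is passed in as a plain scalar.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature):
    """InfoNCE-style contrastive loss for one anchor.

    `temperature` is a fixed scalar here; in an adaptive scheme it would be
    predicted per input rather than held constant.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity of the anchor to its positive, then to each negative.
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))              # positive pair should dominate
```

Lowering the temperature sharpens the softmax over candidates, so hard negatives are penalized more aggressively; making it input-dependent lets the model modulate that sharpness per example.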
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic Furniture Embedding [17.053844262654223]
Indoor scene synthesis involves automatically picking and placing furniture appropriately on a floor plan.
This paper introduces an auto-regressive scene model which can output instance-level predictions.
Our model achieves SOTA results in scene synthesis and improves auto-completion metrics by over 50%.
arXiv Detail & Related papers (2023-03-07T00:26:02Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- BlobGAN: Spatially Disentangled Scene Representations [67.60387150586375]
We propose an unsupervised, mid-level representation for a generative model of scenes.
The representation is mid-level in that it is neither per-pixel nor per-image; rather, scenes are modeled as a collection of spatial, depth-ordered "blobs" of features.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- 3D Photo Stylization: Learning to Generate Stylized Novel Views from a Single Image [26.71747401875526]
Style transfer and single-image 3D photography as two representative tasks have so far evolved independently.
We propose a deep model that learns geometry-aware content features for stylization from a point cloud representation of the scene.
We demonstrate the superiority of our method via extensive qualitative and quantitative studies.
arXiv Detail & Related papers (2021-11-30T23:27:10Z)
- SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting [54.419266357283966]
Single image 3D photography enables viewers to view a still image from novel viewpoints.
Recent approaches combine monocular depth networks with inpainting networks to achieve compelling results.
We present SLIDE, a modular and unified system for single image 3D photography.
arXiv Detail & Related papers (2021-09-02T16:37:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.