Image-Driven Furniture Style for Interactive 3D Scene Modeling
- URL: http://arxiv.org/abs/2010.10557v1
- Date: Tue, 20 Oct 2020 18:19:28 GMT
- Title: Image-Driven Furniture Style for Interactive 3D Scene Modeling
- Authors: Tomer Weiss, Ilkay Yildiz, Nitin Agarwal, Esra Ataer-Cansizoglu,
Jae-Woo Choi
- Abstract summary: Interior style follows rules involving color, geometry and other visual elements.
We propose a method for fast-tracking style-similarity tasks by learning a furniture piece's style-compatibility from interior scene images.
We demonstrate our method with several 3D model style-compatibility results, and with an interactive system for modeling style-consistent scenes.
- Score: 8.8561720398658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating realistic styled spaces is a complex task, which involves design
know-how for what furniture pieces go well together. Interior style follows
abstract rules involving color, geometry and other visual elements. Following
such rules, users manually select similar-style items from large repositories
of 3D furniture models, a process which is both laborious and time-consuming.
We propose a method for fast-tracking style-similarity tasks by learning a
furniture piece's style-compatibility from interior scene images. Such images
contain more style information than images depicting a single furniture piece. To understand
style, we train a deep learning network on a classification task. Based on
image embeddings extracted from our network, we measure stylistic compatibility
of furniture. We demonstrate our method with several 3D model
style-compatibility results, and with an interactive system for modeling
style-consistent scenes.
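As a concrete illustration of the pipeline above (train a network on a style classification task, then compare image embeddings), here is a minimal sketch. It is not the authors' code: a stock torchvision ResNet stands in for the paper's classification-trained network, and plain cosine similarity stands in for its compatibility measure.
```python
# Minimal sketch of embedding-based style compatibility (illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

# Backbone assumed trained on a (style) classification task; here a stock
# ResNet-18 with its classifier head removed so the forward pass yields embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) furniture renders to unit-norm embeddings."""
    return F.normalize(backbone(images), dim=-1)

def style_compatibility(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between embeddings: higher = more style-compatible."""
    return embed(a) @ embed(b).T

# Example: rank candidate furniture images against a query piece.
query, candidates = torch.randn(1, 3, 224, 224), torch.randn(8, 3, 224, 224)
scores = style_compatibility(query, candidates)       # shape (1, 8)
ranking = scores.squeeze(0).argsort(descending=True)  # most compatible first
```
In practice the backbone would first be fine-tuned on style labels derived from interior scene images, as the abstract describes, before its embeddings are used for ranking.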
Related papers
- ReStyle3D: Scene-Level Appearance Transfer with Semantic Correspondences [33.06053818091165]
ReStyle3D is a framework for scene-level appearance transfer from a single style image to a real-world scene represented by multiple views.
It combines explicit semantic correspondences with multi-view consistency to achieve precise and coherent stylization.
Our code, pretrained models, and dataset will be publicly released to support new applications in interior design, virtual staging, and 3D-consistent stylization.
arXiv Detail & Related papers (2025-02-14T18:54:21Z)
- Style3D: Attention-guided Multi-view Style Transfer for 3D Object Generation [9.212876623996475]
Style3D is a novel approach for generating stylized 3D objects from a content image and a style image.
By establishing an interplay between structural and stylistic features across multiple views, our approach enables a holistic 3D stylization process.
arXiv Detail & Related papers (2024-12-04T18:59:38Z)
- Implicit Style-Content Separation using B-LoRA [61.664293840163865]
We introduce B-LoRA, a method that implicitly separates the style and content components of a single image.
By analyzing the architecture of SDXL combined with LoRA, we find that jointly learning the LoRA weights of two specific blocks achieves style-content separation.
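For readers unfamiliar with the mechanism, a minimal LoRA layer looks like the sketch below: a frozen pretrained linear layer plus a trainable low-rank residual. Which two SDXL blocks receive such adapters is the paper's finding; the wrapper and shapes here are generic illustrations, not the paper's code.
```python
# Minimal LoRA layer: frozen base weight W plus low-rank update (alpha/r) * B @ A.
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze pretrained weights
            p.requires_grad_(False)
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank residual: only A and B are trained.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: adapt one linear layer; training touches only ~2*rank*dim parameters.
layer = LoRALinear(torch.nn.Linear(768, 768), rank=4)
out = layer(torch.randn(2, 768))
```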
arXiv Detail & Related papers (2024-03-21T17:20:21Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- RoomDesigner: Encoding Anchor-latents for Style-consistent and Shape-compatible Indoor Scene Generation [26.906174238830474]
Indoor scene generation aims at creating shape-compatible, style-consistent furniture arrangements within a spatially reasonable layout.
We propose a two-stage model that integrates shape priors into indoor scene generation by encoding furniture as anchor-latent representations.
arXiv Detail & Related papers (2023-10-16T03:05:19Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
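A hedged sketch of the input-dependent temperature idea follows: an InfoNCE-style contrastive loss where a small head predicts a per-anchor temperature from the style embedding. The head design and temperature bounds are illustrative assumptions, not the paper's exact architecture.
```python
# Contrastive style loss with a per-sample (input-dependent) temperature.
import torch
import torch.nn.functional as F

class AdaptiveTemperature(torch.nn.Module):
    """Predicts a per-sample temperature from a style embedding."""
    def __init__(self, dim: int, t_min: float = 0.05, t_max: float = 0.5):
        super().__init__()
        self.head = torch.nn.Linear(dim, 1)
        self.t_min, self.t_max = t_min, t_max

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.head(z))       # (N, 1), squashed for stability
        return self.t_min + (self.t_max - self.t_min) * gate

def contrastive_style_loss(anchor, positive, temp_head):
    """InfoNCE over a batch: each anchor's positive is the same-index row;
    all other rows act as negatives. Temperature varies per anchor."""
    za = F.normalize(anchor, dim=-1)
    zp = F.normalize(positive, dim=-1)
    logits = za @ zp.T                           # (N, N) cosine similarities
    tau = temp_head(za)                          # (N, 1) per-anchor temperatures
    labels = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits / tau, labels)

# Usage with dummy embeddings:
temp_head = AdaptiveTemperature(dim=128)
loss = contrastive_style_loss(torch.randn(16, 128), torch.randn(16, 128), temp_head)
```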
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic Furniture Embedding [17.053844262654223]
Indoor scene synthesis involves automatically picking and placing furniture appropriately on a floor plan.
This paper introduces an auto-regressive scene model which can output instance-level predictions.
Our model achieves SOTA results in scene synthesis and improves auto-completion metrics by over 50%.
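To make "auto-regressive scene model with instance-level predictions" concrete, here is an illustrative sketch: a causal transformer that predicts the next furniture instance (category and coarse position) from those already placed. The token layout, vocabulary sizes, and model depth are assumptions, and the paper's CLIP-based semantic furniture embedding is not reproduced here.
```python
# Toy auto-regressive scene model: predict the next furniture instance.
import torch

class SceneAutoregressor(torch.nn.Module):
    """Predicts the next furniture instance (category + 2D position bin)
    from the sequence of instances already placed in the scene."""
    def __init__(self, n_categories=32, n_pos_bins=64, dim=128):
        super().__init__()
        self.cat_emb = torch.nn.Embedding(n_categories, dim)
        self.pos_emb = torch.nn.Embedding(n_pos_bins, dim)
        layer = torch.nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, num_layers=2)
        self.next_cat = torch.nn.Linear(dim, n_categories)
        self.next_pos = torch.nn.Linear(dim, n_pos_bins)

    def forward(self, cats, poss):
        # cats, poss: (B, T) integer ids of already-placed furniture.
        x = self.cat_emb(cats) + self.pos_emb(poss)
        mask = torch.nn.Transformer.generate_square_subsequent_mask(cats.size(1))
        h = self.encoder(x, mask=mask)           # causal self-attention
        return self.next_cat(h[:, -1]), self.next_pos(h[:, -1])

# Example: logits for the next instance, given five placed pieces.
model = SceneAutoregressor()
cat_logits, pos_logits = model(torch.randint(0, 32, (1, 5)),
                               torch.randint(0, 64, (1, 5)))
```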
arXiv Detail & Related papers (2023-03-07T00:26:02Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- BlobGAN: Spatially Disentangled Scene Representations [67.60387150586375]
We propose an unsupervised, mid-level representation for a generative model of scenes.
The representation is mid-level in that it is neither per-pixel nor per-image; rather, scenes are modeled as a collection of spatial, depth-ordered "blobs" of features.
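A toy rendition of this representation, under stated assumptions: each blob carries a 2D center, scale, depth, and a feature vector, and blobs are splatted back-to-front into a feature grid so nearer blobs occlude farther ones. The Gaussian splat and all shapes are illustrative, not BlobGAN's actual layout network.
```python
# Splat depth-ordered feature "blobs" into a 2D feature grid.
import torch

def splat_blobs(centers, scales, depths, features, grid=32):
    """centers: (B, 2) in [0, 1]; scales: (B,); depths: (B,); features: (B, C).
    Returns a (C, grid, grid) feature map with nearer blobs occluding farther ones."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, grid), torch.linspace(0, 1, grid), indexing="ij"
    )
    canvas = torch.zeros(features.size(1), grid, grid)
    # Paint farthest blobs first so nearer blobs composite over them.
    for i in torch.argsort(depths, descending=True):
        d2 = (xs - centers[i, 0]) ** 2 + (ys - centers[i, 1]) ** 2
        alpha = torch.exp(-d2 / (2 * scales[i] ** 2))  # soft Gaussian mask
        canvas = (1 - alpha) * canvas + alpha * features[i][:, None, None]
    return canvas

# Example: four random blobs rendered into a 32x32 feature grid.
fmap = splat_blobs(torch.rand(4, 2), torch.rand(4) * 0.2 + 0.05,
                   torch.rand(4), torch.randn(4, 16))
```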
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting [54.419266357283966]
Single image 3D photography enables viewing a still image from novel viewpoints.
Recent approaches combine monocular depth networks with inpainting networks to achieve compelling results.
We present SLIDE, a modular and unified system for single image 3D photography.
arXiv Detail & Related papers (2021-09-02T16:37:20Z)