COFS: Controllable Furniture layout Synthesis
- URL: http://arxiv.org/abs/2205.14657v1
- Date: Sun, 29 May 2022 13:31:18 GMT
- Title: COFS: Controllable Furniture layout Synthesis
- Authors: Wamiq Reyaz Para, Paul Guerrero, Niloy Mitra, Peter Wonka
- Abstract summary: Many existing methods tackle this problem as a sequence generation problem, which imposes a specific ordering on the elements of the layout.
We propose COFS, an architecture based on standard transformer blocks from language modeling.
Our model consistently outperforms other methods, as we verify with quantitative evaluations.
- Score: 40.68096097121981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scalable generation of furniture layouts is essential for many applications
in virtual reality, augmented reality, game development and synthetic data
generation. Many existing methods tackle this problem as a sequence generation
problem, which imposes a specific ordering on the elements of the layout and
makes such methods impractical for interactive editing or scene completion.
Additionally, most methods focus on generating layouts unconditionally and
offer minimal control over the generated layouts. We propose COFS, an
architecture based on standard transformer blocks from language
modeling. The proposed model is invariant to object order by design, removing
the unnatural requirement of specifying an object generation order.
Furthermore, the model allows for user interaction at multiple levels, enabling
fine-grained control over the generation process. Our model consistently
outperforms other methods, as we verify with quantitative evaluations. Our
method is also faster to train and sample from, compared to
existing methods.
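To make the order-invariance idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the attribute layout and all names are assumptions): a layout is treated as an unordered set of furniture tokens, each built from a class embedding plus a projection of a small geometry vector, and passed through standard transformer encoder blocks without positional encodings, so permuting the objects simply permutes the outputs.

```python
# Illustrative sketch only: a set-style layout encoder built from standard
# transformer blocks with no positional encodings, so no generation order
# is imposed on the furniture objects. Attribute layout is hypothetical.
import torch
import torch.nn as nn

class SetLayoutEncoder(nn.Module):
    def __init__(self, num_classes=20, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, d_model)
        self.geom_proj = nn.Linear(7, d_model)  # x, y, z, w, h, d, angle (assumed)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dropout=0.0, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, class_ids, geometry):
        # class_ids: (B, N) long, geometry: (B, N, 7) float.
        # No positional encoding is added, so the objects form an unordered set.
        tokens = self.class_emb(class_ids) + self.geom_proj(geometry)
        return self.encoder(tokens)  # (B, N, d_model)

# Permuting the input objects permutes the outputs in the same way.
enc = SetLayoutEncoder().eval()
cls = torch.randint(0, 20, (1, 5))
geo = torch.randn(1, 5, 7)
perm = torch.randperm(5)
with torch.no_grad():
    out = enc(cls, geo)
    out_perm = enc(cls[:, perm], geo[:, perm])
print(torch.allclose(out[:, perm], out_perm, atol=1e-5))  # expected: True
```

A full conditional generator would add a decoding head and masking for partially specified scenes; the snippet only demonstrates that no object ordering is baked into the representation.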
Related papers
- Generating Compositional Scenes via Text-to-image RGBA Instance Generation [82.63805151691024]
Text-to-image diffusion generative models can generate high-quality images at the cost of tedious prompt engineering.
We propose a novel multi-stage generation paradigm that is designed for fine-grained control, flexibility and interactivity.
Our experiments show that our RGBA diffusion model is capable of generating diverse and high-quality instances with precise control over object attributes.
arXiv Detail & Related papers (2024-11-16T23:44:14Z)
- HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation [11.087309945227826]
We propose a Hierarchical Controllable (HiCo) diffusion model for layout-to-image generation.
Our key insight is to achieve spatial disentanglement through hierarchical modeling of layouts.
To evaluate the performance of multi-objective controllable layout generation in natural scenes, we introduce the HiCo-7K benchmark.
arXiv Detail & Related papers (2024-10-18T09:36:10Z)
- PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation.
Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts.
We conduct extensive experiments and achieve state-of-the-art (SOTA) performance on public multi-modal layout generation benchmarks.
arXiv Detail & Related papers (2024-06-05T03:05:52Z)
- Constrained Layout Generation with Factor Graphs [21.07236104467961]
We introduce a factor-graph-based approach with four latent variable nodes for each room and a factor node for each constraint.
The factor nodes represent dependencies among the variables to which they are connected, effectively capturing potentially higher-order constraints.
Our approach is simple and generates layouts faithful to the user requirements, as demonstrated by a large improvement in IoU scores over existing methods (a toy sketch of this factor-graph structure appears after this list).
arXiv Detail & Related papers (2024-03-30T14:58:40Z)
- Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints [53.66698106829144]
We propose a unified model to handle a broad range of layout generation tasks.
The model is based on continuous diffusion models.
Experimental results show that the resulting model, LACE, produces high-quality layouts.
arXiv Detail & Related papers (2024-02-07T11:12:41Z)
- LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models [50.73105631853759]
We present a novel generative model named LayoutDiffusion for automatic layout generation.
It learns to reverse a mild forward process in which layouts become increasingly chaotic as the number of forward steps grows.
It enables two conditional layout generation tasks in a plug-and-play manner without re-training and achieves better performance than existing methods.
arXiv Detail & Related papers (2023-03-21T04:41:02Z)
- LayoutDM: Discrete Diffusion Model for Controllable Layout Generation [27.955214767628107]
Controllable layout generation aims at synthesizing plausible arrangements of element bounding boxes with optional constraints.
In this work, we try to solve a broad range of layout generation tasks in a single model that is based on discrete state-space diffusion models.
Our model, named LayoutDM, naturally handles the structured layout data in the discrete representation and learns to progressively infer a noiseless layout from the initial input.
arXiv Detail & Related papers (2023-03-14T17:59:47Z)
- Rewriting Geometric Rules of a GAN [32.22250082294461]
Current machine learning approaches miss a key element of the creative process -- the ability to synthesize things that go far beyond the data distribution and everyday experience.
We enable a user to "warp" a given model by editing just a handful of original model outputs with desired geometric changes.
Our method allows a user to create a model that synthesizes endless objects with defined geometric changes, enabling the creation of a new generative model without the burden of curating a large-scale dataset.
arXiv Detail & Related papers (2022-07-28T17:59:36Z)
- Constrained Graphic Layout Generation via Latent Optimization [17.05026043385661]
We generate graphic layouts that can flexibly incorporate design semantics, either specified implicitly or explicitly by a user.
Our approach builds on a generative layout model based on a Transformer architecture, and formulates the layout generation as a constrained optimization problem.
We show in the experiments that our approach is capable of generating realistic layouts in both constrained and unconstrained generation tasks with a single model.
arXiv Detail & Related papers (2021-08-02T13:04:11Z)
- IOT: Instance-wise Layer Reordering for Transformer Structures [173.39918590438245]
We break the assumption of the fixed layer order in the Transformer and introduce instance-wise layer reordering into the model structure.
Our method can also be applied to other architectures beyond Transformer.
arXiv Detail & Related papers (2021-03-05T03:44:42Z)
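As a companion to the factor-graph entry above, the following is a toy sketch of the kind of structure it describes; the room variables, the example constraint, and the energy function are illustrative assumptions, not the paper's implementation.

```python
# Toy factor graph for constrained room layout (hypothetical API).
# Each room contributes four latent variables (x, y, w, h); each constraint
# becomes a factor node scoring the variables it connects.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class FactorGraph:
    variables: Dict[str, float] = field(default_factory=dict)
    factors: List[Tuple[Tuple[str, ...], Callable[..., float]]] = field(default_factory=list)

    def add_room(self, name: str, x: float, y: float, w: float, h: float) -> None:
        # Four latent variable nodes per room, mirroring the summary above.
        for var, val in zip(("x", "y", "w", "h"), (x, y, w, h)):
            self.variables[f"{name}.{var}"] = val

    def add_factor(self, var_names, score_fn) -> None:
        # A factor node scores the joint assignment of the variables it touches.
        self.factors.append((tuple(var_names), score_fn))

    def energy(self) -> float:
        # Total constraint violation of the current assignment; lower is better.
        return sum(fn(*(self.variables[v] for v in names))
                   for names, fn in self.factors)

g = FactorGraph()
g.add_room("kitchen", x=0.0, y=0.0, w=3.0, h=2.0)
g.add_room("living", x=3.5, y=0.0, w=4.0, h=3.0)
# A higher-order constraint over three variables: the kitchen must end
# before the living room starts along the x axis.
g.add_factor(["kitchen.x", "kitchen.w", "living.x"],
             lambda kx, kw, lx: max(0.0, (kx + kw) - lx))
print(g.energy())  # 0.0 means the constraint is satisfied
```

A real system would run inference (e.g., message passing or optimization) over these variables rather than only scoring a fixed assignment.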
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.