DiffPattern: Layout Pattern Generation via Discrete Diffusion
- URL: http://arxiv.org/abs/2303.13060v1
- Date: Thu, 23 Mar 2023 06:16:14 GMT
- Title: DiffPattern: Layout Pattern Generation via Discrete Diffusion
- Authors: Zixiao Wang, Yunheng Shen, Wenqian Zhao, Yang Bai, Guojin Chen, Farzan
Farnia, Bei Yu
- Abstract summary: We propose \tool{DiffPattern} to generate reliable layout patterns.
Our experiments on several benchmark settings show that \tool{DiffPattern} significantly outperforms existing baselines.
- Score: 16.148506119712735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models dominate the existing literature in layout pattern
generation. However, leaving the guarantee of legality to an inexplicable
neural network could be problematic in several applications. In this paper, we
propose \tool{DiffPattern} to generate reliable layout patterns.
\tool{DiffPattern} introduces a novel diverse topology generation method via a
discrete diffusion model with a compute-efficient, lossless layout pattern
representation. Then a white-box pattern assessment is utilized to generate
legal patterns given desired design rules. Our experiments on several benchmark
settings show that \tool{DiffPattern} significantly outperforms existing
baselines and is capable of synthesizing reliable layout patterns.
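The discrete-diffusion idea underlying the abstract can be sketched in a few lines: a categorical forward process gradually corrupts a token grid, and a model is later trained to reverse it. The snippet below is an illustrative toy with a uniform corruption kernel, not DiffPattern's actual implementation; the token encoding, `beta` schedule, and kernel choice are all assumptions.

```python
import random

def forward_step(tokens, beta, n_categories, rng=random):
    """One forward diffusion step on a categorical sequence:
    each token is resampled uniformly with probability beta,
    so the pattern grows more chaotic as steps accumulate."""
    return [rng.randrange(n_categories) if rng.random() < beta else t
            for t in tokens]

def diffuse(tokens, betas, n_categories, rng=random):
    """Run the full forward process x_0 -> x_T by chaining steps."""
    for beta in betas:
        tokens = forward_step(tokens, beta, n_categories, rng)
    return tokens
```

Generation then amounts to learning the reverse of `forward_step` with a neural network and sampling from pure noise back to a clean token grid; a legality check (the paper's white-box pattern assessment) can filter the results against design rules.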
Related papers
- Identification of Novel Modes in Generative Models via Fourier-based Differential Clustering [33.22153760327227]
We propose a method called Fourier-based Identification of Novel Clusters (FINC) to identify modes that one generative model produces with higher frequency.
We demonstrate the application of FINC to large-scale computer vision datasets and generative model frameworks.
arXiv Detail & Related papers (2024-05-04T16:06:50Z) - Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints [53.66698106829144]
We propose a unified model to handle a broad range of layout generation tasks.
The model is based on continuous diffusion models.
Experiment results show that LACE produces high-quality layouts.
arXiv Detail & Related papers (2024-02-07T11:12:41Z) - LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models [50.73105631853759]
We present a novel generative model named LayoutDiffusion for automatic layout generation.
It learns to reverse a mild forward process in which layouts become increasingly chaotic as the number of forward steps grows.
It enables two conditional layout generation tasks in a plug-and-play manner without re-training and achieves better performance than existing methods.
arXiv Detail & Related papers (2023-03-21T04:41:02Z) - Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [106.06185677214353]
Diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models which enables the use of new compositional operators.
We find these samplers lead to notable improvements in compositional generation across a wide set of problems.
arXiv Detail & Related papers (2023-02-22T18:48:46Z) - DiffusER: Discrete Diffusion via Edit-based Reconstruction [88.62707047517914]
DiffusER is an edit-based generative model for text based on denoising diffusion models.
It can rival autoregressive models on several tasks spanning machine translation, summarization, and style transfer.
It can also perform other varieties of generation that standard autoregressive models are not well-suited for.
arXiv Detail & Related papers (2022-10-30T16:55:23Z) - Diverse Semantic Image Synthesis via Probability Distribution Modeling [103.88931623488088]
We propose a novel diverse semantic image synthesis framework.
Our method can achieve superior diversity and comparable quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-03-11T18:59:25Z) - Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
arXiv Detail & Related papers (2020-07-30T17:58:16Z) - A Probabilistic Generative Model for Typographical Analysis of Early Modern Printing [44.62884731273421]
We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents.
Our approach introduces a neural editor model that first generates well-understood printing perturbations from template parameters via interpretable latent variables.
We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents.
arXiv Detail & Related papers (2020-05-04T17:01:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.