DiffPattern: Layout Pattern Generation via Discrete Diffusion
- URL: http://arxiv.org/abs/2303.13060v1
- Date: Thu, 23 Mar 2023 06:16:14 GMT
- Title: DiffPattern: Layout Pattern Generation via Discrete Diffusion
- Authors: Zixiao Wang, Yunheng Shen, Wenqian Zhao, Yang Bai, Guojin Chen, Farzan
Farnia, Bei Yu
- Abstract summary: We propose DiffPattern to generate reliable layout patterns.
Our experiments on several benchmark settings show that DiffPattern significantly outperforms existing baselines.
- Score: 16.148506119712735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models dominate the existing literature in layout pattern
generation. However, leaving the guarantee of legality to an inexplicable
neural network could be problematic in several applications. In this paper, we
propose DiffPattern to generate reliable layout patterns.
DiffPattern introduces a novel diverse topology generation method via a
discrete diffusion model with a compute-efficient, lossless layout pattern
representation. A white-box pattern assessment is then used to generate
legal patterns that satisfy the desired design rules. Our experiments on several
benchmark settings show that DiffPattern significantly outperforms existing
baselines and is capable of synthesizing reliable layout patterns.
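The abstract describes generating layout topologies with a discrete diffusion model over a categorical pattern representation. As a minimal sketch of the general idea (not the paper's exact formulation), one forward step of a categorical diffusion process with an assumed uniform transition kernel can be written as:

```python
import numpy as np

# Hedged sketch: one forward (noising) step of discrete diffusion over
# layout tokens. The uniform resampling kernel, grid size, and class
# count are illustrative assumptions, not the paper's actual design.

def forward_step(tokens, beta, num_classes, rng):
    """With probability beta, resample each token uniformly; else keep it."""
    resample = rng.random(tokens.shape) < beta
    random_tokens = rng.integers(0, num_classes, size=tokens.shape)
    return np.where(resample, random_tokens, tokens)

rng = np.random.default_rng(0)
x0 = rng.integers(0, 4, size=(8, 8))   # toy 8x8 grid of 4 token classes
xt = x0.copy()
for t in range(10):                    # repeated steps drift toward uniform noise
    xt = forward_step(xt, beta=0.1, num_classes=4, rng=rng)
```

The reverse (generative) model would then be trained to denoise such corrupted grids step by step; legality checking against design rules is handled separately by the white-box assessment described above.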
Related papers
- PatternPaint: Generating Layout Patterns Using Generative AI and Inpainting Techniques [5.126358554705107]
Existing training-based ML pattern generation approaches struggle to produce legal layout patterns in the early stages of technology node development.
We propose PatternPaint, a training-free framework capable of generating legal patterns with limited DRC Clean training samples.
PatternPaint is the first framework to generate a complex 2D layout pattern library using only 20 design rule clean layout patterns as input.
arXiv Detail & Related papers (2024-09-02T16:02:26Z)
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints [53.66698106829144]
We propose a unified model to handle a broad range of layout generation tasks.
The model is based on continuous diffusion models.
Experiment results show that LACE produces high-quality layouts.
arXiv Detail & Related papers (2024-02-07T11:12:41Z)
- LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models [50.73105631853759]
We present a novel generative model named LayoutDiffusion for automatic layout generation.
It learns to reverse a mild forward process, in which layouts become increasingly chaotic with the growth of forward steps.
It enables two conditional layout generation tasks in a plug-and-play manner without re-training and achieves better performance than existing methods.
arXiv Detail & Related papers (2023-03-21T04:41:02Z) - Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [102.64648158034568]
diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models which enables the use of new compositional operators.
We find these samplers lead to notable improvements in compositional generation across a wide set of problems.
arXiv Detail & Related papers (2023-02-22T18:48:46Z) - DiffusER: Discrete Diffusion via Edit-based Reconstruction [88.62707047517914]
DiffusER is an edit-based generative model for text based on denoising diffusion models.
It can rival autoregressive models on several tasks spanning machine translation, summarization, and style transfer.
It can also perform other varieties of generation that standard autoregressive models are not well-suited for.
arXiv Detail & Related papers (2022-10-30T16:55:23Z)
- Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
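The summary above frames a network layer as a linear associative memory whose stored rules can be edited directly. As a hedged sketch of that general idea (a simplified least-squares form, not the paper's full constrained method), a rank-one update can force a layer to map a chosen key to a new value:

```python
import numpy as np

# Hedged sketch of the "layer as linear associative memory" view:
# a rank-one update forces layer weights W to map a chosen key k to
# a new value v exactly, leaving directions orthogonal to k untouched.
# Dimensions and names here are illustrative assumptions.

def rewrite(W, k, v):
    """Return W' such that W' @ k == v (minimal-norm rank-one update)."""
    return W + np.outer(v - W @ k, k) / (k @ k)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))   # toy layer weights
k = rng.standard_normal(3)        # key: the input direction to edit
v = rng.standard_normal(4)        # value: the desired new output
W_new = rewrite(W, k, v)          # W_new @ k now equals v
```

The update is minimal in the sense that inputs orthogonal to the key are mapped exactly as before, which is why such edits can change one rule without retraining the whole model.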
arXiv Detail & Related papers (2020-07-30T17:58:16Z)
- A Probabilistic Generative Model for Typographical Analysis of Early Modern Printing [44.62884731273421]
We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents.
Our approach introduces a neural editor model that first generates well-understood printing perturbations from template parameters via interpretable latent variables.
We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents.
arXiv Detail & Related papers (2020-05-04T17:01:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.