PatternPaint: Generating Layout Patterns Using Generative AI and Inpainting Techniques
- URL: http://arxiv.org/abs/2409.01348v2
- Date: Fri, 25 Oct 2024 23:24:03 GMT
- Authors: Guanglei Zhou, Bhargav Korrapati, Gaurav Rajavendra Reddy, Jiang Hu, Yiran Chen, Dipto G. Thakurta
- Abstract summary: Existing training-based ML pattern generation approaches struggle to produce legal layout patterns in the early stages of technology node development.
We propose PatternPaint, a training-free framework capable of generating legal patterns with limited DRC Clean training samples.
PatternPaint is the first framework to generate a complex 2D layout pattern library using only 20 design rule clean layout patterns as input.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generation of diverse VLSI layout patterns is crucial for various downstream tasks in design for manufacturing (DFM) studies. However, the lengthy design cycles often hinder the creation of a comprehensive layout pattern library, and new detrimental patterns may be discovered late in the product development process. Existing training-based ML pattern generation approaches struggle to produce legal layout patterns in the early stages of technology node development due to the limited availability of training samples. To address this challenge, we propose PatternPaint, a training-free framework capable of generating legal patterns with limited DRC-clean training samples. PatternPaint simplifies complex layout pattern generation into a series of inpainting processes with a template-based denoising scheme. Our framework enables even a general pre-trained image foundation model (Stable Diffusion) to generate valuable pattern variations, thereby enhancing the library. Notably, PatternPaint can operate with any input size. Furthermore, we explore fine-tuning a pre-trained model with VLSI layout images, resulting in a 2x generation efficiency compared to the base model. Our results show that the proposed model can generate legal patterns in complex 2D metal interconnect design rule settings and achieves a high diversity score. The designed system, with its flexible settings, supports pattern generation with localized changes and design rule violation correction. Validated on a sub-3nm technology node (Intel 18A), PatternPaint is the first framework to generate a complex 2D layout pattern library using only 20 design-rule-clean layout patterns as input.
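The abstract's core loop (mask a template region of a seed layout, inpaint it, keep only DRC-clean results) can be illustrated with a toy sketch. All names here are ours, not from the paper: the real framework inpaints with a pre-trained Stable Diffusion model against real design rules, whereas this stub fills the masked region randomly and checks a single made-up spacing rule.

```python
import numpy as np

def toy_drc_clean(layout: np.ndarray) -> bool:
    """Toy design rule (our invention): no two horizontally adjacent filled cells."""
    return not np.any(layout[:, :-1] & layout[:, 1:])

def stub_inpaint(layout: np.ndarray, mask: np.ndarray, rng) -> np.ndarray:
    """Stand-in for a diffusion inpainting call: random fill inside the mask."""
    out = layout.copy()
    out[mask] = rng.integers(0, 2, size=mask.sum())
    return out

def generate_variations(seed_layout, mask, n, max_tries=200, seed=0):
    """Inpaint repeatedly; keep only candidates that pass the (toy) DRC."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(max_tries):
        cand = stub_inpaint(seed_layout, mask, rng)
        if toy_drc_clean(cand):
            kept.append(cand)
        if len(kept) == n:
            break
    return kept

# Sparse "metal" seed pattern that satisfies the toy rule.
seed = np.zeros((8, 8), dtype=np.int64)
seed[::2, ::2] = 1
# Template mask: the region to regenerate.
mask = np.zeros_like(seed, dtype=bool)
mask[3:6, 3:6] = True
variations = generate_variations(seed, mask, n=4)
print(len(variations))
```

The point of the sketch is the rejection-sampling structure: legality is enforced by checking candidates after generation, so the generator itself needs no training on legal patterns.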
Related papers
- DMM: Building a Versatile Image Generation Model via Distillation-Based Model Merging [32.97010533998294]
We introduce a style-promptable image generation pipeline which can accurately generate arbitrary-style images under the control of style vectors.
Based on this design, we propose the score distillation based model merging paradigm (DMM), compressing multiple models into a single versatile T2I model.
Our experiments demonstrate that DMM can compactly reorganize the knowledge from multiple teacher models and achieve controllable arbitrary-style generation.
arXiv Detail & Related papers (2025-04-16T15:09:45Z)
- Structured Pattern Expansion with Diffusion Models [6.726377308248659]
Recent advances in diffusion models have significantly improved the synthesis of materials, textures, and 3D shapes.
In this paper, we address the synthesis of structured, stationary patterns, where diffusion models are generally less reliable and, more importantly, less controllable.
Our method enables users to exercise direct control over the synthesis by expanding a partially hand-drawn pattern into a larger design while preserving the structure and details of the input.
arXiv Detail & Related papers (2024-11-12T18:39:23Z)
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z)
- CAR: Controllable Autoregressive Modeling for Visual Generation [100.33455832783416]
Controllable AutoRegressive Modeling (CAR) is a novel, plug-and-play framework that integrates conditional control into multi-scale latent variable modeling.
CAR progressively refines and captures control representations, which are injected into each autoregressive step of the pre-trained model to guide the generation process.
Our approach demonstrates excellent controllability across various types of conditions and delivers higher image quality compared to previous methods.
arXiv Detail & Related papers (2024-10-07T00:55:42Z)
- ChatPattern: Layout Pattern Customization via Natural Language [18.611898021267923]
ChatPattern is a novel Large-Language-Model powered framework for flexible pattern customization.
The LLM agent can interpret natural language requirements and operate design tools to meet specified needs.
The generator excels in conditional layout generation, pattern modification, and memory-friendly pattern extension.
arXiv Detail & Related papers (2024-03-15T09:15:22Z)
- Desigen: A Pipeline for Controllable Design Template Generation [69.51563467689795]
Desigen is an automatic template creation pipeline which generates background images as well as layout elements over the background.
We propose two techniques to constrain the saliency distribution and reduce the attention weight in desired regions during the background generation process.
Experiments demonstrate that the proposed pipeline generates high-quality templates comparable to human designers.
arXiv Detail & Related papers (2024-03-14T04:32:28Z)
- DivCon: Divide and Conquer for Progressive Text-to-Image Generation [0.0]
Diffusion-driven text-to-image (T2I) generation has achieved remarkable advancements.
Layout is employed as an intermediary to bridge large language models and layout-based diffusion models.
We introduce a divide-and-conquer approach which decouples the T2I generation task into simple subtasks.
arXiv Detail & Related papers (2024-03-11T03:24:44Z)
- Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints [53.66698106829144]
We propose a unified model to handle a broad range of layout generation tasks.
The model is based on continuous diffusion models.
Experimental results show that LACE produces high-quality layouts.
arXiv Detail & Related papers (2024-02-07T11:12:41Z)
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes via a subband coefficient filtering scheme.
We derive the subband adaptive training strategy to train our model to effectively learn to generate coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- DiffPattern: Layout Pattern Generation via Discrete Diffusion [16.148506119712735]
We propose DiffPattern to generate reliable layout patterns.
Our experiments on several benchmark settings show that DiffPattern significantly outperforms existing baselines.
arXiv Detail & Related papers (2023-03-23T06:16:14Z)
- LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models [50.73105631853759]
We present a novel generative model named LayoutDiffusion for automatic layout generation.
It learns to reverse a mild forward process, in which layouts become increasingly chaotic with the growth of forward steps.
It enables two conditional layout generation tasks in a plug-and-play manner without re-training and achieves better performance than existing methods.
arXiv Detail & Related papers (2023-03-21T04:41:02Z)
- LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer [80.61492265221817]
Graphic layout designs play an essential role in visual communication.
Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production.
Generative models emerge to make design automation scalable but it remains non-trivial to produce designs that comply with designers' desires.
arXiv Detail & Related papers (2022-12-19T21:57:35Z)
- Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis [12.449076001538552]
This paper focuses on a recent emerged task, layout-to-image, to learn generative models capable of synthesizing photo-realistic images from spatial layout.
Style control at the image level is the same as in vanilla GANs, while style control at the object mask level is realized by a proposed novel feature normalization scheme.
In experiments, the proposed method is evaluated on the COCO-Stuff and Visual Genome datasets, achieving state-of-the-art performance.
arXiv Detail & Related papers (2020-03-25T18:16:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.