Large Scale Mask Optimization Via Convolutional Fourier Neural Operator
and Litho-Guided Self Training
- URL: http://arxiv.org/abs/2207.04056v1
- Date: Fri, 8 Jul 2022 16:39:31 GMT
- Title: Large Scale Mask Optimization Via Convolutional Fourier Neural Operator
and Litho-Guided Self Training
- Authors: Haoyu Yang, Zongyi Li, Kumara Sastry, Saumyadip Mukhopadhyay, Anima
Anandkumar, Brucek Khailany, Vivek Singh, Haoxing Ren
- Abstract summary: We present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art academic numerical mask optimizers.
- Score: 54.16367467777526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning techniques have been extensively studied for mask
optimization problems, aiming at better mask printability, shorter turnaround
time, better mask manufacturability, and so on. However, most of this
research focuses on generating initial solutions for small design
regions. To further realize the potential of machine learning techniques on
mask optimization tasks, we present a Convolutional Fourier Neural Operator
(CFNO) that can efficiently learn layout tile dependencies and hence promise
stitch-less large-scale mask optimization with the limited intervention of
legacy tools. We discover the possibility of litho-guided self-training (LGST)
through a trained machine learning model when solving non-convex optimization
problems, which allows iterative model and dataset update and brings
significant model performance improvement. Experimental results show that, for
the first time, our machine learning-based framework outperforms
state-of-the-art academic numerical mask optimizers with an order of magnitude
speedup.
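The CFNO combines convolution with Fourier-domain operators over layout tiles. As an illustration only (not the authors' implementation), the core of a Fourier-neural-operator spectral convolution can be sketched in NumPy; the tile size, mode count, and complex weights below are arbitrary placeholders, and the mode truncation is simplified to a single low-frequency corner:

```python
import numpy as np

def spectral_conv2d(x, weights, modes):
    """Fourier-layer core: FFT -> keep low modes -> learned mixing -> inverse FFT.

    x:       (h, w) real-valued feature map (single channel for simplicity)
    weights: (modes, modes) complex multipliers for the retained frequencies
    modes:   number of low-frequency modes kept along each axis
    """
    x_ft = np.fft.rfft2(x)                       # (h, w//2 + 1) complex spectrum
    out_ft = np.zeros_like(x_ft)
    # Pointwise mixing of the retained low-frequency modes (simplified:
    # a full FNO layer also keeps the negative-frequency corner).
    out_ft[:modes, :modes] = x_ft[:modes, :modes] * weights
    return np.fft.irfft2(out_ft, s=x.shape)      # back to the spatial domain

rng = np.random.default_rng(0)
tile = rng.standard_normal((64, 64))             # stand-in for a layout tile
w = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
out = spectral_conv2d(tile, w, modes=8)
print(out.shape)  # (64, 64)
```

Because the mixing happens in Fourier space, the layer's receptive field spans the whole tile, which is what makes cross-tile dependencies learnable without stitching.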
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm that enables finer control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
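As a rough sketch of the normalized-gradient-difference idea (the paper's actual update rule and sign conventions may differ), each objective's gradient is normalized to unit length before the two are combined, so neither objective dominates purely by gradient magnitude:

```python
import numpy as np

def ngdiff_direction(grad_forget, grad_retain, eps=1e-12):
    """Combine forget/retain gradients after normalizing each to unit length."""
    g_f = grad_forget / (np.linalg.norm(grad_forget) + eps)
    g_r = grad_retain / (np.linalg.norm(grad_retain) + eps)
    # Ascend on the forget loss, descend on the retain loss (illustrative convention).
    return g_f - g_r

g_forget = np.array([3.0, 0.0])   # large-magnitude forget gradient
g_retain = np.array([0.0, 0.4])   # small-magnitude retain gradient
step = ngdiff_direction(g_forget, g_retain)
print(step)  # [ 1. -1.] -- equal influence after normalization
```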
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix [17.086679273053853]
Large Language Models (LLMs) have shown immense potential in enhancing various aspects of our daily lives.
Their growing capabilities come at the cost of extremely large model sizes, making deployment on edge devices challenging.
This paper introduces a novel approach to LLM weight pruning that directly optimizes for approximating the attention matrix.
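The paper prunes weights specifically to preserve the attention matrix. As a hedged baseline sketch (not the paper's method), one can measure how plain magnitude pruning of the query/key projections perturbs the resulting attention matrix:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_matrix(x, w_q, w_k):
    """Row-softmax of scaled query-key scores."""
    q, k = x @ w_q, x @ w_k
    return softmax(q @ k.T / np.sqrt(q.shape[-1]))

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 16))                 # 5 tokens, dim 16
w_q, w_k = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))

a_dense = attention_matrix(x, w_q, w_k)
a_pruned = attention_matrix(x, magnitude_prune(w_q, 0.5), magnitude_prune(w_k, 0.5))
err = np.linalg.norm(a_dense - a_pruned) / np.linalg.norm(a_dense)
print(f"relative attention error at 50% sparsity: {err:.3f}")
```

The paper's contribution is to choose the pruned entries so that this attention-matrix error, rather than a per-weight criterion, is what gets minimized.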
arXiv Detail & Related papers (2024-10-15T04:35:56Z)
- ILILT: Implicit Learning of Inverse Lithography Technologies [5.373749225521622]
We propose an implicit-learning ILT framework, ILILT, which leverages implicit learning and lithography-conditioned inputs to ground ILT solutions, significantly improving efficiency and quality.
arXiv Detail & Related papers (2024-05-06T15:49:46Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- Learning the Quality of Machine Permutations in Job Shop Scheduling [9.972171952370287]
We propose a novel supervised learning task that aims at predicting the quality of machine permutations.
Then, we design an original methodology for estimating this quality that makes it possible to create an accurate sequential deep learning model.
arXiv Detail & Related papers (2022-07-07T11:53:10Z)
- Machine Learning Constructives and Local Searches for the Travelling Salesman Problem [7.656272344163667]
We present improvements to the computational weight of the original deep learning model.
The possibility of adding a local-search phase is explored to further improve performance.
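The local-search phase explored above is typically a 2-opt-style pass over the tour. A minimal, generic 2-opt implementation (illustrative, not the paper's exact procedure) looks like:

```python
import itertools

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
    return tour

# Four cities on a unit square; the crossing tour 0-2-1-3 gets uncrossed.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
best = two_opt([0, 2, 1, 3], dist)
print(best, round(tour_length(best, dist), 3))  # [0, 1, 2, 3] 4.0
```

In the ML-constructive setting, the learned model produces the initial tour and a pass like this cleans up its local mistakes.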
arXiv Detail & Related papers (2021-08-02T14:34:44Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Masking as an Efficient Alternative to Finetuning for Pretrained Language Models [49.64561153284428]
We learn selective binary masks for pretrained weights in lieu of modifying them through finetuning.
In intrinsic evaluations, we show that representations computed by masked language models encode information necessary for solving downstream tasks.
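The core idea, selecting a binary mask over frozen pretrained weights instead of updating them, can be sketched as follows (the mask is random here purely for illustration; in the paper it is learned per task):

```python
import numpy as np

def masked_forward(x, w_pretrained, mask):
    """Apply a frozen pretrained weight matrix through a binary mask.
    Only `mask` would be trained; w_pretrained stays fixed."""
    return x @ (w_pretrained * mask)

rng = np.random.default_rng(2)
w = rng.standard_normal((8, 4))                     # frozen pretrained weights
mask = (rng.random((8, 4)) > 0.3).astype(w.dtype)   # 0/1 mask (random stand-in)
x = rng.standard_normal((3, 8))
y = masked_forward(x, w, mask)
print(y.shape)  # (3, 4)
```

Storing one bit per weight instead of a full finetuned copy is what makes this an efficient alternative when serving many downstream tasks.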
arXiv Detail & Related papers (2020-04-26T15:03:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.