DevelSet: Deep Neural Level Set for Instant Mask Optimization
- URL: http://arxiv.org/abs/2303.12529v1
- Date: Sat, 18 Mar 2023 13:48:53 GMT
- Title: DevelSet: Deep Neural Level Set for Instant Mask Optimization
- Authors: Guojin Chen, Ziyang Yu, Hongduo Liu, Yuzhe Ma, Bei Yu
- Abstract summary: The inverse lithography technique (ILT) has drawn significant attention and is becoming prevalent in emerging OPC solutions.
In this paper, we present DevelSet, a GPU and deep neural network (DNN) accelerated level set OPC framework for the metal layer.
- Score: 11.847061281805463
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the feature size continuously shrinking in advanced technology nodes,
mask optimization is increasingly crucial in the conventional design flow,
accompanied by an explosive growth in prohibitive computational overhead in
optical proximity correction (OPC) methods. Recently, the inverse lithography
technique (ILT) has drawn significant attention and is becoming prevalent in
emerging OPC solutions. However, existing ILT methods are either time-consuming
or deliver weak mask printability and manufacturability. In this paper, we
present DevelSet, a GPU and deep neural network (DNN) accelerated level set OPC
framework for the metal layer. We first improve the conventional level set-based
ILT algorithm by introducing the curvature term to reduce mask complexity and
applying GPU acceleration to overcome computational bottlenecks. To further
enhance printability and speed up iterative convergence, we propose a novel deep
neural network delicately designed with level set intrinsic principles to
facilitate the joint optimization of DNN and GPU accelerated level set
optimizer. Experimental results show that the DevelSet framework surpasses
state-of-the-art methods in printability and boosts runtime performance to an
instant level (around 1 second).
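For intuition, below is a minimal NumPy sketch of a curvature-regularized level set update in the spirit of the framework described above. The litho_velocity argument, step size, and curvature weight are illustrative placeholders, not the paper's GPU-evaluated lithography gradient or settings.

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """Mean curvature kappa = div(grad(phi) / |grad(phi)|) via finite differences."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    dyy, _ = np.gradient(gy / norm)   # d/dy of the y-component of the unit normal
    _, dxx = np.gradient(gx / norm)   # d/dx of the x-component of the unit normal
    return dxx + dyy

def level_set_step(phi, litho_velocity, dt=0.5, w_curv=0.2):
    """One explicit evolution step: a positive velocity pushes the boundary outward,
    while the curvature term smooths the contour (reducing mask complexity)."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    speed = litho_velocity - w_curv * curvature(phi)
    return phi - dt * speed * grad_norm

# Toy usage: start from a circle (signed distance function) and expand it uniformly.
n = 128
yy, xx = np.mgrid[:n, :n]
phi = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2) - 30.0
for _ in range(20):
    phi = level_set_step(phi, litho_velocity=np.ones_like(phi))
mask = (phi <= 0).astype(np.uint8)   # the mask is the zero-sublevel set of phi
```

In the paper itself the velocity field comes from the lithography model, the update runs on GPU, and the co-designed DNN accelerates convergence; the snippet above is only a schematic stand-in.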
Related papers
- Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching [53.91395791840179]
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
arXiv Detail & Related papers (2023-12-19T02:27:22Z)
- Check-Agnosia based Post-Processor for Message-Passing Decoding of Quantum LDPC Codes [3.4602940992970908]
We introduce a new post-processing algorithm with a hardware-friendly orientation, providing error correction performance competitive with state-of-the-art techniques.
We show that latency values close to one microsecond can be obtained on the FPGA board, and provide evidence that much lower latency values can be obtained for ASIC implementations.
arXiv Detail & Related papers (2023-10-23T14:51:22Z)
- Inverse Lithography Physics-informed Deep Neural Level Set for Mask Optimization [0.8547032097715571]
Level set-based inverse lithography technology (ILT) has drawn considerable attention as a promising OPC solution.
Deep learning (DL) methods have shown great potential in accelerating ILT.
We propose an inverse lithography physics-informed deep neural level set (ILDLS) approach for mask optimization.
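As a rough picture of what "physics-informed" means here, the sketch below wires a toy differentiable lithography forward model into a printability loss; the Gaussian blur and sigmoid resist are stand-ins for the optical-kernel and resist models a real ILT objective would use.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def printed_image(mask, blur_sigma=4.0, resist_threshold=0.5, steepness=25.0):
    """Toy lithography forward model: optical low-pass filtering (a Gaussian here,
    in place of real optical kernels) followed by a sigmoid resist model."""
    aerial = gaussian_filter(mask.astype(float), sigma=blur_sigma)
    return 1.0 / (1.0 + np.exp(-steepness * (aerial - resist_threshold)))

def printability_loss(mask, target):
    """L2 error between the simulated print and the target layout; a
    physics-informed network would backpropagate through this kind of term."""
    return float(np.mean((printed_image(mask) - target) ** 2))

# Toy usage: a square target evaluated with the target itself as the mask.
target = np.zeros((128, 128))
target[40:88, 40:88] = 1.0
loss = printability_loss(target, target)
```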
arXiv Detail & Related papers (2023-08-15T01:56:22Z)
- Sophisticated deep learning with on-chip optical diffractive tensor processing [5.081061839052458]
Photonic integrated circuits provide an efficient approach to mitigating the bandwidth limitations and power wall of their electronic counterparts.
We propose an optical computing architecture enabled by on-chip diffraction to implement convolutional acceleration, termed the optical convolution unit (OCU).
With OCU as the fundamental unit, we build an optical convolutional neural network (oCNN) to implement two popular deep learning tasks: classification and regression.
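The shortcut such diffractive hardware exploits is the convolution theorem: propagation through a lens (or an equivalent on-chip diffractive element) yields a Fourier transform, and a pointwise modulation in that plane realizes a convolution. The NumPy lines below show only this mathematical identity, not the photonic implementation.

```python
import numpy as np

def fourier_plane_convolution(image, kernel):
    """Circular convolution implemented as a pointwise product in the Fourier
    domain, the same operation a 4f optical system performs with light."""
    h, w = image.shape
    transfer = np.fft.fft2(kernel, s=(h, w))            # kernel's transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

# Toy usage: blur a random image with a 3x3 averaging kernel.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = fourier_plane_convolution(img, np.ones((3, 3)) / 9.0)
```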
arXiv Detail & Related papers (2022-12-20T03:33:26Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
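A butterfly-sparse weight matrix factors an n x n map into log2(n) sparse stages, each mixing pairs of coordinates at a doubling stride (the FFT access pattern), so applying it costs O(n log n) instead of O(n^2). The NumPy sketch below shows only that factorization structure; the paper's contribution is mapping it onto attention and FFN layers with hardware co-design.

```python
import numpy as np

def random_butterfly_factors(n, rng):
    """log2(n) stages; each stage holds one 2x2 block per coordinate pair,
    i.e. only 2n nonzeros per stage."""
    assert n & (n - 1) == 0, "n must be a power of two"
    return [rng.standard_normal((n // 2, 2, 2)) for _ in range(int(np.log2(n)))]

def apply_butterfly(factors, x):
    """Apply the butterfly product to a length-n vector, FFT-style."""
    n = x.shape[0]
    y = x.copy()
    stride = 1
    for blocks in factors:
        out = np.empty_like(y)
        b = 0
        for start in range(0, n, 2 * stride):
            for i in range(start, start + stride):
                a, c = y[i], y[i + stride]
                w = blocks[b]
                out[i] = w[0, 0] * a + w[0, 1] * c
                out[i + stride] = w[1, 0] * a + w[1, 1] * c
                b += 1
        y = out
        stride *= 2
    return y

# Toy usage: an 8-point butterfly transform of a random vector.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = apply_butterfly(random_butterfly_factors(8, rng), x)
```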
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- Large Scale Mask Optimization Via Convolutional Fourier Neural Operator and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
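A minimal sketch of the Fourier-operator idea behind such a model, assuming a single-channel layout tile and ignoring channel mixing, nonlinearities, and the litho-guided self-training loop named in the title:

```python
import numpy as np

def spectral_conv2d(x, w_low, w_high, modes=12):
    """Fourier-operator-style layer: FFT the tile, apply learned complex weights
    to a truncated set of low-frequency modes, then inverse FFT back."""
    spec = np.fft.rfft2(x)
    out = np.zeros_like(spec)
    out[:modes, :modes] = spec[:modes, :modes] * w_low      # low positive rows
    out[-modes:, :modes] = spec[-modes:, :modes] * w_high   # low negative rows
    return np.fft.irfft2(out, s=x.shape)

# Toy usage on a random 256x256 "layout tile" with random spectral weights.
rng = np.random.default_rng(0)
tile = rng.random((256, 256))
w1 = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
w2 = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
out = spectral_conv2d(tile, w1, w2)
```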
arXiv Detail & Related papers (2022-07-08T16:39:31Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, the computation adapts well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC services.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- Use of static surrogates in hyperparameter optimization [0.0]
This work aims at enhancing HyperNOMAD, a library that adapts a direct search derivative-free optimization algorithm to tune both the architecture and the training of a neural network simultaneously.
These additions to HyperNOMAD are shown to reduce its resource consumption without harming the quality of the proposed solutions.
arXiv Detail & Related papers (2021-03-14T16:15:53Z)
- Deep unfolding of the weighted MMSE beamforming algorithm [9.518010235273783]
We propose the novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel.
Deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability.
By means of simulations, we show that, in most of the settings, the unfolded WMMSE outperforms or performs equally to the WMMSE for a fixed number of iterations.
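Deep unfolding in a nutshell: fix the number of iterations of a classic algorithm and treat per-iteration quantities as trainable parameters, so the unrolled computation becomes a network. The sketch below unrolls plain gradient descent on a least-squares problem as a stand-in; the paper unrolls the WMMSE beamforming updates themselves.

```python
import numpy as np

def unfolded_least_squares(A, b, step_sizes):
    """Run exactly len(step_sizes) gradient steps on ||Ax - b||^2; each step size
    plays the role of a trainable per-layer parameter in a deep-unfolded network."""
    x = np.zeros(A.shape[1])
    for alpha in step_sizes:                 # one "layer" per unrolled iteration
        x = x - alpha * A.T @ (A @ x - b)
    return x

# Forward pass with hand-picked step sizes; in deep unfolding these would be
# learned end-to-end from data for a fixed unrolling depth.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = unfolded_least_squares(A, b, step_sizes=[0.02] * 8)
```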
arXiv Detail & Related papers (2020-06-15T14:51:20Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)