OL-Transformer: A Fast and Universal Surrogate Simulator for Optical
Multilayer Thin Film Structures
- URL: http://arxiv.org/abs/2305.11984v2
- Date: Sat, 22 Jul 2023 01:11:40 GMT
- Title: OL-Transformer: A Fast and Universal Surrogate Simulator for Optical
Multilayer Thin Film Structures
- Authors: Taigao Ma, Haozhu Wang, L. Jay Guo
- Abstract summary: We propose the Opto-Layer Transformer to act as a universal surrogate simulator for an enormous variety of structure types.
Our model can predict accurate reflection and transmission spectra for up to $10^{25}$ different multilayer structures.
- Score: 1.2891210250935143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based methods have recently been established as fast and
accurate surrogate simulators for optical multilayer thin film structures.
However, existing methods work only for limited types of structures with
different material arrangements, preventing their application to diverse
and universal structures. Here, we propose the Opto-Layer (OL) Transformer to
act as a universal surrogate simulator for an enormous variety of structure types.
Combined with the technique of structure serialization, our model can predict
accurate reflection and transmission spectra for up to $10^{25}$ different
multilayer structures, while still achieving a six-fold reduction in
simulation time compared to physical solvers. Further investigation reveals
that the general learning ability comes from the fact that our model first
learns the physical embeddings and then uses the self-attention mechanism to
capture the hidden relationships of light-matter interaction between layers.
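As a concrete illustration of the pipeline the abstract describes, the sketch below serializes a stack into per-layer tokens (material identity plus a discretized thickness), lets a transformer encoder attend across layers, and regresses the reflection and transmission spectra. This is a minimal PyTorch sketch under assumed vocabulary sizes, model dimensions, and serialization scheme; none of the names or hyperparameters come from the paper.

import torch
import torch.nn as nn

class SpectrumSurrogate(nn.Module):
    """Toy transformer surrogate: serialized multilayer stack -> R/T spectra."""
    def __init__(self, n_materials=20, n_thickness_bins=100,
                 d_model=128, n_blocks=4, n_wavelengths=200):
        super().__init__()
        # One token per physical layer: material identity + discretized thickness.
        self.material_emb = nn.Embedding(n_materials, d_model)
        self.thickness_emb = nn.Embedding(n_thickness_bins, d_model)
        block = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=n_blocks)
        # Regress reflection and transmission at each sampled wavelength.
        self.n_wavelengths = n_wavelengths
        self.head = nn.Linear(d_model, 2 * n_wavelengths)

    def forward(self, materials, thicknesses):
        # materials, thicknesses: (batch, stack_depth) integer tensors
        tokens = self.material_emb(materials) + self.thickness_emb(thicknesses)
        encoded = self.encoder(tokens)          # self-attention across layers
        pooled = encoded.mean(dim=1)            # aggregate the whole stack
        r_t = torch.sigmoid(self.head(pooled))  # spectra are bounded in [0, 1]
        return r_t.view(-1, 2, self.n_wavelengths)

# Example: a batch of two 5-layer stacks.
model = SpectrumSurrogate()
materials = torch.randint(0, 20, (2, 5))
thicknesses = torch.randint(0, 100, (2, 5))
spectra = model(materials, thicknesses)  # shape (2, 2, 200): R and T

Because every structure maps onto the same token vocabulary, a single trained model can in principle cover the combinatorial space of material and thickness arrangements, which is the sense in which the surrogate is universal.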
Related papers
- Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning [53.685764040547625]
Transformer-based large language models (LLMs) have displayed remarkable creative prowess and emergence capabilities.
This work provides a fine mathematical analysis to show how transformers leverage the multi-concept semantics of words to enable powerful ICL and excellent out-of-distribution ICL abilities.
arXiv Detail & Related papers (2024-11-04T15:54:32Z)
- What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis [8.008567379796666]
The Transformer architecture has inarguably revolutionized deep learning.
At its core, the attention block differs in form and functionality from most other architectural components in deep learning.
The root causes behind these outward manifestations, and the precise mechanisms that govern them, remain poorly understood.
arXiv Detail & Related papers (2024-10-14T18:15:02Z)
- Fast and Reliable Probabilistic Reflectometry Inversion with Prior-Amortized Neural Posterior Estimation [73.81105275628751]
Finding all structures compatible with reflectometry data is computationally prohibitive for standard algorithms.
We address this limitation with a probabilistic deep learning method that identifies all realistic structures in seconds.
Our method, Prior-Amortized Neural Posterior Estimation (PANPE), combines simulation-based inference with novel adaptive priors.
arXiv Detail & Related papers (2024-07-26T10:29:16Z)
- How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps on understanding in-context learning (ICL) in more complex scenarios, by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
arXiv Detail & Related papers (2023-10-16T17:40:49Z)
- A Hierarchical Architecture for Neural Materials [13.144139872006287]
We introduce a neural appearance model that offers a new level of accuracy.
An inception-based core network structure captures material appearances at multiple scales.
We encode the inputs into frequency space, introduce a gradient-based loss, and apply it adaptively as training progresses.
arXiv Detail & Related papers (2023-07-19T17:00:45Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Virtual twins of nonlinear vibrating multiphysics microstructures: physics-based versus deep learning-based approaches [0.0]
We apply deep learning techniques to generate accurate, efficient and real-time reduced order models.
We extensively test the reliability of the proposed procedures on micromirrors, arches and gyroscopes.
By addressing an electromechanical gyroscope, we show that the non-intrusive deep learning approach generalizes easily to complex multiphysics problems.
arXiv Detail & Related papers (2022-05-12T07:40:35Z)
- Disentangling multiple scattering with deep learning: application to strain mapping from electron diffraction patterns [48.53244254413104]
We implement a deep neural network called FCU-Net to invert highly nonlinear electron diffraction patterns into quantitative structure factor images.
We trained the FCU-Net using over 200,000 unique dynamical diffraction patterns which include many different combinations of crystal structures.
Our simulated diffraction pattern library, implementation of FCU-Net, and trained model weights are freely available in open source repositories.
arXiv Detail & Related papers (2022-02-01T03:53:39Z)
- TMM-Fast: A Transfer Matrix Computation Package for Multilayer Thin-Film Optimization [62.997667081978825]
An advanced thin-film structure can consist of multiple materials with different thicknesses and numerous layers.
Design and optimization of complex thin-film structures with multiple variables is a computationally heavy problem that is still under active research.
We propose the Python package TMM-Fast which enables parallelized computation of reflection and transmission of light at different angles of incidence and wavelengths through multilayer thin films.
arXiv Detail & Related papers (2021-11-24T14:47:37Z)
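For reference, the computation that TMM-Fast parallelizes is the classical transfer matrix method. Below is a minimal normal-incidence NumPy sketch of that textbook algorithm; the function name and interface are illustrative assumptions, not TMM-Fast's API.

import numpy as np

def tmm_normal_incidence(layer_indices, layer_thicknesses, wavelength,
                         n_in=1.0, n_out=1.52):
    """Reflectance and transmittance of a stack at normal incidence.

    layer_indices     : refractive index of each internal layer
    layer_thicknesses : thickness of each layer (same unit as wavelength)
    n_in, n_out       : semi-infinite ambient and substrate indices
    """
    M = np.eye(2, dtype=complex)
    for n, d in zip(layer_indices, layer_thicknesses):
        delta = 2 * np.pi * n * d / wavelength  # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    # Convert the accumulated characteristic matrix to amplitude coefficients.
    B, C = M @ np.array([1.0, n_out], dtype=complex)
    r = (n_in * B - C) / (n_in * B + C)
    t = 2 * n_in / (n_in * B + C)
    R = abs(r) ** 2
    T = (n_out / n_in) * abs(t) ** 2  # valid for non-absorbing ambient/substrate
    return R, T

# Example: a quarter-wave SiO2/TiO2 pair on glass at 550 nm (indices assumed).
R, T = tmm_normal_incidence([1.46, 2.40], [94.2, 57.3], 550.0)
print(f"R = {R:.3f}, T = {T:.3f}")

Per the abstract above, TMM-Fast's contribution is vectorizing this matrix product over many wavelengths and angles of incidence at once rather than looping over them as the sketch does.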