Investigation of inverse design of multilayer thin-films with
conditional invertible Neural Networks
- URL: http://arxiv.org/abs/2210.04629v1
- Date: Mon, 10 Oct 2022 12:29:20 GMT
- Title: Investigation of inverse design of multilayer thin-films with
conditional invertible Neural Networks
- Authors: Alexander Luce, Ali Mahdavi, Heribert Wankerl, Florian Marquardt
- Abstract summary: We use conditional Invertible Neural Networks (cINNs) to inversely design multilayer thin-films given an optical target.
We show that cINNs can generate proposals for thin-film configurations that are reasonably close to the desired target depending on random variables.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of designing optical multilayer thin-films regarding a given target
is currently solved using gradient-based optimization in conjunction with
methods that can introduce additional thin-film layers. Recently, Deep Learning
and Reinforcement Learning have been introduced to the task of designing
thin-films with great success; however, a trained network is usually only able
to become proficient for a single target and must be retrained if the optical
targets are varied. In this work, we apply conditional Invertible Neural
Networks (cINNs) to the inverse design of multilayer thin-films given an optical
target. Since the cINN learns the energy landscape of all thin-film
configurations within the training dataset, we show that cINNs can generate a
stochastic ensemble of proposals for thin-film configurations that are
reasonably close to the desired target, depending only on random variables. By
further refining the proposed configurations with a local optimization, we show
that the generated thin-films reach the target with significantly greater
precision than comparable state-of-the-art approaches. Furthermore, we tested
the generative capabilities on samples that lie outside the training data
distribution and found that the cINN was able to predict thin-films for
out-of-distribution targets as well. The results suggest that, to improve the
generative design of thin-films, it is instructive to use established and new
machine learning methods in conjunction to obtain the most favorable results.
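As a rough illustration of the workflow described in the abstract, the sketch below (not the authors' implementation) pairs a small conditional invertible network built from affine coupling blocks with a local gradient-based refinement step: random latent vectors are mapped through the inverse pass, conditioned on the optical target, to obtain an ensemble of candidate stacks, and a selected candidate is then refined against a simulator. All names and sizes here (N_LAYERS, N_WAVELENGTHS, network widths, and the toy simulate_reflectivity stand-in for a real transfer-matrix solver) are illustrative assumptions; the maximum-likelihood training of the cINN is omitted to keep the sketch short.

```python
# Minimal sketch of cINN-based inverse design with local refinement (PyTorch).
# Assumptions: a 10-layer stack, a 64-point target spectrum, and a toy
# differentiable "simulator" standing in for a real transfer-matrix solver.
import torch
import torch.nn as nn

N_LAYERS = 10          # number of thin-film layers (assumption)
N_WAVELENGTHS = 64     # sampling points of the optical target (assumption)


def subnet(in_dim, out_dim):
    """Small fully connected subnetwork used inside each coupling block."""
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))


class ConditionalCoupling(nn.Module):
    """One affine coupling block: half the variables are rescaled and shifted by
    amounts predicted from the other half together with the condition."""

    def __init__(self, dim, cond_dim):
        super().__init__()
        self.d = dim // 2
        self.net = subnet(self.d + cond_dim, 2 * (dim - self.d))

    def forward(self, x, c, reverse=False):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                          # bounded log-scales for stability
        x2 = (x2 - t) * torch.exp(-s) if reverse else x2 * torch.exp(s) + t
        return torch.cat([x1, x2], dim=1)


class ThinFilmCINN(nn.Module):
    """Invertible map between layer thicknesses and a latent vector, conditioned
    on an embedding of the optical target."""

    def __init__(self, dim=N_LAYERS, cond_dim=32, n_blocks=6):
        super().__init__()
        self.cond_net = subnet(N_WAVELENGTHS, cond_dim)
        self.blocks = nn.ModuleList(ConditionalCoupling(dim, cond_dim) for _ in range(n_blocks))
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]  # mix dimensions

    def forward(self, x, target, reverse=False):
        c = self.cond_net(target)
        pairs = list(zip(self.blocks, self.perms))
        if not reverse:                            # thicknesses -> latent
            for block, perm in pairs:
                x = block(x, c)[:, perm]
        else:                                      # latent -> thicknesses
            for block, perm in reversed(pairs):
                x = block(x[:, torch.argsort(perm)], c, reverse=True)
        return x


def simulate_reflectivity(thicknesses):
    """Toy differentiable stand-in for a transfer-matrix solver (assumption only,
    so that the refinement loop below runs end to end)."""
    wavelengths = torch.linspace(400.0, 800.0, N_WAVELENGTHS)
    phase = 2 * torch.pi * thicknesses.unsqueeze(-1) / wavelengths
    return torch.sigmoid(torch.cos(phase).sum(dim=-2))


@torch.no_grad()
def propose(model, target, n_samples=100):
    """Draw a stochastic ensemble of candidate stacks for one optical target
    (target has shape (1, N_WAVELENGTHS))."""
    z = torch.randn(n_samples, N_LAYERS)
    return model(z, target.expand(n_samples, -1), reverse=True)


def refine(candidate, target, steps=200, lr=1e-2):
    """Local gradient-based refinement of a single proposed stack."""
    d = candidate.clone().requires_grad_(True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((simulate_reflectivity(d) - target.squeeze(0)) ** 2).mean()
        loss.backward()
        opt.step()
    return d.detach()


# Usage sketch: the model is untrained here; in practice it would first be trained
# by maximum likelihood on a dataset of (layer-thickness, spectrum) pairs.
model = ThinFilmCINN()
target = torch.rand(1, N_WAVELENGTHS)              # placeholder optical target
candidates = propose(model, target)
best = min(candidates,
           key=lambda c: ((simulate_reflectivity(c) - target.squeeze(0)) ** 2).mean().item())
refined = refine(best, target)
```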
Related papers
- Meta-Sparsity: Learning Optimal Sparse Structures in Multi-task Networks through Meta-learning [4.462334751640166]
Meta-sparsity is a framework for learning model sparsity that allows deep neural networks (DNNs) to generate optimal sparse shared structures in a multi-task learning setting.
Inspired by Model Agnostic Meta-Learning (MAML), the emphasis is on learning shared and optimally sparse parameters in multi-task scenarios.
The effectiveness of meta-sparsity is rigorously evaluated by extensive experiments on two datasets.
arXiv Detail & Related papers (2025-01-21T13:25:32Z) - Do We Need to Design Specific Diffusion Models for Different Tasks? Try ONE-PIC [77.8851460746251]
We propose a simple, efficient, and general approach to fine-tune diffusion models.
ONE-PIC enhances the inherited generative ability in the pretrained diffusion models without introducing additional modules.
Our method is simple and efficient, streamlining the adaptation process and achieving excellent performance at lower cost.
arXiv Detail & Related papers (2024-12-07T11:19:32Z) - Generative Neural Fields by Mixtures of Neural Implicit Functions [43.27461391283186]
We propose a novel approach to learning the generative neural fields represented by linear combinations of implicit basis networks.
Our algorithm learns basis networks in the form of implicit neural representations and their coefficients in a latent space by either conducting meta-learning or adopting auto-decoding paradigms.
arXiv Detail & Related papers (2023-10-30T11:41:41Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z) - RLFlow: Optimising Neural Network Subgraph Transformation with World
Models [0.0]
We propose a model-based agent which learns to optimise the architecture of neural networks by performing a sequence of subgraph transformations to reduce model runtime.
We show our approach can match the performance of the state of the art on common convolutional networks and outperform it by up to 5% on transformer-style architectures.
arXiv Detail & Related papers (2022-05-03T11:52:54Z) - TMM-Fast: A Transfer Matrix Computation Package for Multilayer Thin-Film
Optimization [62.997667081978825]
An advanced thin-film structure can consist of multiple materials with different thicknesses and numerous layers.
Design and optimization of complex thin-film structures with multiple variables is a computationally heavy problem that is still under active research.
We propose the Python package TMM-Fast, which enables parallelized computation of reflection and transmission of light at different angles of incidence and wavelengths through a multilayer thin-film; a standalone transfer-matrix sketch is given after this list.
arXiv Detail & Related papers (2021-11-24T14:47:37Z) - Follow Your Path: a Progressive Method for Knowledge Distillation [23.709919521355936]
We propose ProKT, a new model-agnostic method by projecting the supervision signals of a teacher model into the student's parameter space.
Experiments on both image and text datasets show that our proposed ProKT consistently achieves superior performance compared to other existing knowledge distillation methods.
arXiv Detail & Related papers (2021-07-20T07:44:33Z) - A Deep-Unfolded Reference-Based RPCA Network For Video
Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z) - Multi-objective and categorical global optimization of photonic
structures based on ResNet generative neural networks [0.0]
A residual network scheme enables GLOnets to evolve from a deep architecture to a shallow network that generates a narrow distribution of globally optimal devices.
We show that GLOnets can find the global optimum with orders of magnitude faster speeds compared to conventional algorithms.
Results indicate that advanced concepts in deep learning can push the capabilities of inverse design algorithms for photonics.
arXiv Detail & Related papers (2020-07-20T06:50:53Z)
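The TMM-Fast entry above concerns transfer-matrix computation of reflection and transmission through multilayer thin-films. The following is a minimal standalone sketch of the standard characteristic-matrix method at normal incidence, written in plain NumPy for clarity; it is not the TMM-Fast API, and the function name, the default substrate index of 1.52, and the quarter-wave MgF2 example are illustrative assumptions. TMM-Fast itself additionally parallelizes over wavelengths and angles of incidence.

```python
import numpy as np


def reflectance_transmittance(n_layers, d_layers, wavelengths, n_in=1.0, n_sub=1.52):
    """Reflectance R and transmittance T of a lossless stack at normal incidence,
    computed with the characteristic-matrix (transfer-matrix) method.

    n_layers: refractive indices, ordered from the incident medium to the substrate
    d_layers: physical thicknesses in the same length unit as `wavelengths`
    """
    wavelengths = np.asarray(wavelengths, dtype=float)
    R = np.empty_like(wavelengths)
    T = np.empty_like(wavelengths)
    for k, lam in enumerate(wavelengths):
        # Build the stack's [B, C] vector, starting from the substrate admittance.
        B, C = 1.0 + 0j, complex(n_sub)
        for n, d in zip(reversed(n_layers), reversed(d_layers)):
            delta = 2.0 * np.pi * n * d / lam                 # phase thickness of the layer
            B, C = (np.cos(delta) * B + 1j * np.sin(delta) / n * C,
                    1j * n * np.sin(delta) * B + np.cos(delta) * C)
        r = (n_in * B - C) / (n_in * B + C)                   # amplitude reflectance
        R[k] = abs(r) ** 2
        T[k] = 4.0 * n_in * np.real(n_sub) / abs(n_in * B + C) ** 2
    return R, T


# Example: a single quarter-wave MgF2 (n ~ 1.38) anti-reflection layer on glass,
# designed for 550 nm; the computed reflectance dips near 550 nm.
lam = np.linspace(400.0, 800.0, 201)                          # nm
R, T = reflectance_transmittance([1.38], [550.0 / (4 * 1.38)], lam)
```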
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.