Active learning of deep surrogates for PDEs: Application to metasurface
design
- URL: http://arxiv.org/abs/2008.12649v1
- Date: Mon, 24 Aug 2020 17:14:13 GMT
- Title: Active learning of deep surrogates for PDEs: Application to metasurface
design
- Authors: Raphaël Pestourie, Youssef Mroueh, Thanh V. Nguyen, Payel Das,
Steven G. Johnson
- Abstract summary: We present an active learning algorithm that reduces the number of training points by more than an order of magnitude for a neural-network surrogate model of optical-surface components.
Results show that the surrogate evaluation is over two orders of magnitude faster than a direct solve, and we demonstrate how this can be exploited to accelerate large-scale engineering optimization.
- Score: 30.731619528075214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surrogate models for partial-differential equations are widely used in the
design of meta-materials to rapidly evaluate the behavior of composable
components. However, the training cost of accurate surrogates by machine
learning can rapidly increase with the number of variables. For photonic-device
models, we find that this training becomes especially challenging as design
regions grow larger than the optical wavelength. We present an active learning
algorithm that reduces the number of training points by more than an order of
magnitude for a neural-network surrogate model of optical-surface components
compared to random samples. Results show that the surrogate evaluation is over
two orders of magnitude faster than a direct solve, and we demonstrate how this
can be exploited to accelerate large-scale engineering optimization.
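The active-learning loop described in the abstract can be sketched in broad strokes. This is a minimal illustration, not the paper's algorithm: it assumes an ensemble-disagreement acquisition rule, and `direct_solve` and `fit_network` are toy stand-ins for the expensive PDE solver and the neural-network surrogate.

```python
import numpy as np

def direct_solve(x):
    # toy stand-in for an expensive PDE solve (assumption)
    return np.sin(3 * x).sum(axis=-1)

def fit_network(X, y, seed):
    # toy surrogate: random-feature ridge regression standing in for a NN
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], 64))
    phi = np.tanh(X @ W)
    coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return lambda Z: np.tanh(Z @ W) @ coef

def active_learn(n_rounds=5, n_init=32, n_pool=512, n_pick=16, n_models=4):
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(n_init, 2))
    y = direct_solve(X)
    for _ in range(n_rounds):
        # retrain an ensemble, then query only the most uncertain points
        models = [fit_network(X, y, seed=s) for s in range(n_models)]
        pool = rng.uniform(-1, 1, size=(n_pool, 2))
        preds = np.stack([m(pool) for m in models])  # (n_models, n_pool)
        var = preds.var(axis=0)                      # ensemble disagreement
        pick = pool[np.argsort(var)[-n_pick:]]
        X = np.vstack([X, pick])
        y = np.concatenate([y, direct_solve(pick)])  # label only these
    return X, y
```

Only the selected points are ever passed to the expensive solver, which is the source of the training-cost reduction the abstract reports.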
Related papers
- Jacobian-Enhanced Neural Networks [0.0]
Jacobian-Enhanced Neural Networks (JENN) are densely connected multi-layer perceptrons.
JENN's main benefit is better accuracy with fewer training points compared to standard neural networks.
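The gradient-enhanced idea behind JENN can be illustrated by its training objective: a standard regression loss augmented with a Jacobian-matching penalty. The function below is a hypothetical sketch of that composite loss, not JENN's actual implementation.

```python
import numpy as np

def jenn_loss(y_pred, y_true, J_pred, J_true, lam=1.0):
    # standard regression term plus a Jacobian-matching penalty
    value_term = np.mean((y_pred - y_true) ** 2)
    grad_term = np.mean((J_pred - J_true) ** 2)
    return value_term + lam * grad_term
```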
arXiv Detail & Related papers (2024-06-13T14:04:34Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
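Inverse design by descending a learned energy can be sketched as follows; the quadratic `energy` here is a toy stand-in for the diffusion-model energy in the paper, used only to illustrate optimizing the design variable.

```python
import numpy as np

# toy energy standing in for a learned diffusion-model energy (assumption):
# minimizing E over the design variable x is the inverse-design step
def energy(x):
    return np.sum((x - 0.5) ** 2)

def grad_energy(x):
    return 2 * (x - 0.5)

def design_by_energy_descent(x0, lr=0.1, steps=200):
    # plain gradient descent on the energy landscape
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad_energy(x)
    return x
```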
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Transfer learning-assisted inverse modeling in nanophotonics based on mixture density networks [0.840835093659811]
In this paper, we propose an inverse modeling method for nanophotonic structures based on a mixture density network model enhanced by transfer learning.
The proposed approach overcomes these limitations via transfer learning while preserving high prediction accuracy for design solutions given an optical response as input.
arXiv Detail & Related papers (2024-01-21T09:03:30Z)
- Model-aware reinforcement learning for high-performance Bayesian experimental design in quantum metrology [0.5461938536945721]
Quantum sensors offer control flexibility during estimation by allowing the experimenter to manipulate various parameters.
We introduce a versatile procedure capable of optimizing a wide range of problems in quantum metrology, estimation, and hypothesis testing.
We combine model-aware reinforcement learning (RL) with Bayesian estimation based on particle filtering.
arXiv Detail & Related papers (2023-12-28T12:04:15Z)
- Multi-scale Time-stepping of Partial Differential Equations with Transformers [8.430481660019451]
We develop fast surrogates for Partial Differential Equations (PDEs).
Our model achieves similar or better results in predicting the time evolution of the Navier-Stokes equations.
arXiv Detail & Related papers (2023-11-03T20:26:43Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators, an AI framework, provide a principled approach for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
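One concrete instance of learning mappings between functions is a Fourier-space layer; the sketch below applies a linear transform to the lowest Fourier modes of a sampled function, with hypothetical `weights` standing in for trained parameters.

```python
import numpy as np

def spectral_layer(u, weights, n_modes):
    # Fourier-space linear transform acting on the lowest n_modes modes
    U = np.fft.rfft(u)
    out = np.zeros_like(U)
    out[:n_modes] = weights * U[:n_modes]
    return np.fft.irfft(out, n=u.shape[0])
```

Because the layer acts on Fourier coefficients rather than grid values, the same `weights` can be applied to the function sampled at any resolution, which is the discretization-invariance property neural operators rely on.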
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Retrieving space-dependent polarization transformations via near-optimal quantum process tomography [55.41644538483948]
We investigate the application of genetic and machine learning approaches to tomographic problems.
We find that the neural network-based scheme provides a significant speed-up that may be critical in applications requiring real-time characterization.
We expect these results to lay the groundwork for the optimization of tomographic approaches in more general quantum processes.
arXiv Detail & Related papers (2022-10-27T11:37:14Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
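The corrector interface NeurVec describes can be sketched as a term added to a coarse integrator step. Purely for illustration, the learned network is replaced here by the analytically known one-step error of a linear ODE, so the large-step update becomes exact by construction.

```python
import numpy as np

def euler_step(x, h, f):
    return x + h * f(x)

def corrected_step(x, h, f, corrector):
    # coarse explicit-Euler update plus a learned correction term
    return euler_step(x, h, f) + corrector(x, h)

# toy example: for dx/dt = a*x the exact one-step map is known, so a
# "perfect corrector" compensates the Euler error at a large step size
a = -1.0
f = lambda x: a * x
exact = lambda x, h: x * np.exp(a * h)
corrector = lambda x, h: exact(x, h) - euler_step(x, h, f)

h = 0.5  # far larger than plain Euler would tolerate accurately
x0 = np.array([1.0])
x1 = corrected_step(x0, h, f, corrector)
```

In the paper the corrector is a trained network rather than an analytic formula; the sketch only shows where such a term enters the time-stepping loop.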
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration [11.154086943903696]
High-Level Synthesis is a solution for fast prototyping of application-specific hardware.
We propose, for the first time in the literature, graph neural networks for HLS that jointly predict acceleration performance and hardware costs.
We show that our approach achieves prediction accuracy comparable with that of commonly used simulators.
arXiv Detail & Related papers (2021-11-29T18:17:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.