Invertible Surrogate Models: Joint surrogate modelling and
reconstruction of Laser-Wakefield Acceleration by invertible neural networks
- URL: http://arxiv.org/abs/2106.00432v1
- Date: Tue, 1 Jun 2021 12:26:10 GMT
- Title: Invertible Surrogate Models: Joint surrogate modelling and
reconstruction of Laser-Wakefield Acceleration by invertible neural networks
- Authors: Friedrich Bethke, Richard Pausch, Patrick Stiller, Alexander Debus,
Michael Bussmann, Nico Hoffmann
- Abstract summary: Invertible neural networks are a recent technique in machine learning.
We introduce invertible surrogate models that approximate the complex forward simulation of the physics involved in laser plasma accelerators: iLWFA.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invertible neural networks are a recent machine-learning technique:
promising neural network architectures that can be run in both forward and
reverse mode. In this paper, we introduce invertible surrogate models that
approximate the complex forward simulation of the physics involved in laser
plasma accelerators: iLWFA. The bijective design of the surrogate model also
provides all the means needed to reconstruct experimentally acquired
diagnostics. The quality of our invertible laser wakefield acceleration
network is verified on a large set of numerical LWFA simulations.
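The forward/reverse property described in the abstract can be illustrated with a minimal additive coupling layer, a standard invertible building block. This is a hedged sketch: the tiny `subnet` and all names are illustrative assumptions, not the iLWFA implementation.

```python
import math

# Minimal sketch of an additive coupling layer, a standard invertible
# building block; the tiny subnet() and all names here are illustrative
# assumptions, not the iLWFA implementation.

def subnet(x):
    # Stand-in for a learned sub-network acting on half of the input.
    return [math.tanh(v) for v in x]

def forward(x1, x2):
    # y1 = x1, y2 = x2 + t(x1): the shift depends only on x1.
    t = subnet(x1)
    return x1, [a + b for a, b in zip(x2, t)]

def inverse(y1, y2):
    # x1 = y1, x2 = y2 - t(y1): exact inversion by construction.
    t = subnet(y1)
    return y1, [a - b for a, b in zip(y2, t)]

x1, x2 = [0.5, -1.2], [2.0, 0.3]
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
# (r1, r2) recovers (x1, x2) up to floating-point error
```

Because the sub-network only ever shifts the other half of the input, the inverse needs no numerical solve, which is what makes bijective surrogates usable for reconstruction as well as prediction.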
Related papers
- Neural Residual Diffusion Models for Deep Scalable Vision Generation [17.931568104324985]
We propose a unified and massively scalable Neural Residual Diffusion Models framework (Neural-RDM)
The proposed neural residual models obtain state-of-the-art scores on image and video generative benchmarks.
arXiv Detail & Related papers (2024-06-19T04:57:18Z)
- 1-bit Quantized On-chip Hybrid Diffraction Neural Network Enabled by Authentic All-optical Fully-connected Architecture [4.594367761345624]
This study introduces the Hybrid Diffraction Neural Network (HDNN), a novel architecture that incorporates matrix multiplication into DNNs.
Utilizing a singular phase modulation layer and an amplitude modulation layer, the trained neural network demonstrated remarkable accuracies of 96.39% and 89% in digit recognition tasks.
arXiv Detail & Related papers (2024-04-11T02:54:17Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
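A minimal sketch of the hard-constraint idea above, assuming a single linear equality a·x = b enforced by closed-form orthogonal projection of the raw network output. Names and numbers are illustrative, not the KKT-hPINN code.

```python
# Hedged sketch of a hard linear equality constraint enforced by
# closed-form orthogonal projection, the idea behind constraint layers
# such as KKT-hPINN (single constraint a.x = b; all values illustrative).

def project(z, a, b):
    # x = z - a * (a.z - b) / (a.a) satisfies a.x = b exactly.
    dot_az = sum(ai * zi for ai, zi in zip(a, z))
    dot_aa = sum(ai * ai for ai in a)
    scale = (dot_az - b) / dot_aa
    return [zi - ai * scale for zi, ai in zip(z, a)]

z = [1.0, 2.0, 3.0]          # raw network output (violates the balance)
a, b = [1.0, 1.0, 1.0], 5.0  # constraint: outputs must sum to 5
x = project(z, a, b)
# sum(x) equals 5.0 up to floating-point error
```

Because the projection is differentiable and exact, the constraint holds at every training step rather than only being encouraged by a penalty term.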
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design to ensure a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions in replacing the time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- A Spiking Central Pattern Generator for the control of a simulated lamprey robot running on SpiNNaker and Loihi neuromorphic boards [1.8139771201780368]
We propose a spiking neural network and its implementation on neuromorphic hardware as a means to control a simulated lamprey model.
We show that by modifying the input to the network, which can be provided by sensory information, the robot can be controlled dynamically in direction and pace.
This category of spiking algorithms shows a promising potential to exploit the theoretical advantages of neuromorphic hardware in terms of energy efficiency and computational speed.
arXiv Detail & Related papers (2021-01-18T11:04:16Z)
- Sobolev training of thermodynamic-informed neural networks for smoothed elasto-plasticity models with level set hardening [0.0]
We introduce a deep learning framework designed to train smoothed elastoplasticity models with interpretable components.
By recasting the yield function as an evolving level set, we introduce a machine learning approach to predict the solutions of the Hamilton-Jacobi equation.
arXiv Detail & Related papers (2020-10-15T22:43:32Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of the Neural Cellular Automata can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCA, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
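The cellular-automata mechanism summarized above can be sketched in one update step: every cell applies the same local rule to its neighbourhood. The fixed weights below stand in for a learned network and are not taken from the paper.

```python
# Illustrative one-step Neural Cellular Automata update: every cell applies
# the same local rule to its neighbourhood (periodic boundary). The fixed
# weights stand in for a learned network and are not from the paper.

def step(state, w=(0.25, 0.5, 0.25)):
    out = []
    for i in range(len(state)):
        left, mid, right = state[i - 1], state[i], state[(i + 1) % len(state)]
        out.append(w[0] * left + w[1] * mid + w[2] * right)
    return out

state = step([0.0, 0.0, 1.0, 0.0, 0.0])
# the single active cell spreads to its neighbours: [0.0, 0.25, 0.5, 0.25, 0.0]
```

Iterating such a shared local rule is what lets a small network grow and maintain a global pattern, which the manifold construction then parameterizes across many target images.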
arXiv Detail & Related papers (2020-06-22T11:41:57Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
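The MP-style neuron that the FT entry contrasts against (an activation function applied to a real-valued weighted sum of incoming signals) can be sketched generically; this is a textbook illustration, not the FT model's transmitter formulation.

```python
import math

# Generic sketch of the classic MP-style neuron: an activation function
# applied to a real-valued weighted sum of incoming signals. A textbook
# illustration, not the Flexible Transmitter model's formulation.

def mp_neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

y = mp_neuron([1.0, 0.0, -1.0], [0.5, 0.3, 0.2], 0.1)
# y lies in (0, 1)
```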
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.