Deep learning and high harmonic generation
- URL: http://arxiv.org/abs/2012.10328v2
- Date: Mon, 4 Jan 2021 17:54:36 GMT
- Title: Deep learning and high harmonic generation
- Authors: M. Lytova and M. Spanner and I. Tamblyn
- Abstract summary: We explore the utility of various deep neural networks (NNs) when applied to high harmonic generation (HHG) scenarios.
First, we train the NNs to predict the time-dependent dipole and spectra of HHG emission from reduced-dimensionality models of di- and triatomic systems.
We then demonstrate that transfer learning can be applied to our networks to expand the range of applicability of the networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using machine learning, we explore the utility of various deep neural
networks (NNs) when applied to high harmonic generation (HHG) scenarios. First,
we train the NNs to predict the time-dependent dipole and spectra of HHG
emission from reduced-dimensionality models of di- and triatomic systems based
on sets of randomly generated parameters (laser pulse intensity,
internuclear distance, and molecular orientation). These networks, once
trained, are useful tools to rapidly generate the HHG spectra of our systems.
Similarly, we have trained the NNs to solve the inverse problem: determining
the molecular parameters from HHG spectra or dipole acceleration data.
These types of networks could then be used as spectroscopic tools to invert HHG
spectra in order to recover the underlying physical parameters of a system.
Next, we demonstrate that transfer learning can be applied to our networks to
expand the range of applicability of the networks with only a small number of
new test cases added to our training sets. Finally, we demonstrate NNs that can
be used to classify molecules by type (di- or triatomic, symmetric or
asymmetric), a task for which even fairly simple fully connected neural
networks suffice. With an outlook toward training with experimental data, these NN
topologies offer a novel set of spectroscopic tools that could be incorporated
into HHG experiments.
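As a concrete illustration of the forward and inverse mappings described in the abstract, the sketch below pairs a fully connected surrogate (parameters to spectrum) with an inverse network (spectrum to parameters), and ends with the freeze-and-fine-tune pattern commonly used for transfer learning. This is a minimal PyTorch sketch under assumed shapes and widths (3 input parameters, 512 spectral bins, 256-unit hidden layers); the paper does not specify these architectures, and all names here are hypothetical.

```python
import torch
import torch.nn as nn

N_BINS = 512  # hypothetical number of spectral bins

class ForwardSurrogate(nn.Module):
    """Forward problem: (intensity, internuclear distance, orientation) -> spectrum."""
    def __init__(self, n_bins: int = N_BINS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_bins),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        return self.net(params)

class InverseNet(nn.Module):
    """Inverse problem: recover the three molecular parameters from a spectrum."""
    def __init__(self, n_bins: int = N_BINS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        return self.net(spectrum)

# Toy training loop on stand-in data; real training would use spectra computed
# from the reduced-dimensionality models.
model = ForwardSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
params = torch.rand(64, 3)        # stand-in for sampled (I, R, theta)
spectra = torch.rand(64, N_BINS)  # stand-in for computed spectra
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(params), spectra)
    loss.backward()
    opt.step()

# Transfer learning in the spirit of the abstract: freeze all but the output
# layer, then fine-tune on a small number of new cases.
for layer in list(model.net.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False
opt_ft = torch.optim.Adam(model.net[-1].parameters(), lr=1e-4)
```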
Related papers
- Spectrum-Informed Multistage Neural Networks: Multiscale Function Approximators of Machine Precision [1.2663244405597374]
We propose a novel multistage neural network approach in which each stage learns the residue left by the previous stage.
We successfully tackle the spectral bias of neural networks.
This approach allows the neural network to fit target functions to double floating-point machine precision.
arXiv Detail & Related papers (2024-07-24T12:11:09Z)
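A minimal sketch of the multistage idea from the entry above, under the assumption that each stage is an ordinary MLP fit to the residue left by the sum of the earlier stages; the paper's spectrum-informed details are omitted.

```python
import torch
import torch.nn as nn

def make_net() -> nn.Module:
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(8.0 * x) + 0.1 * torch.sin(40.0 * x)  # multiscale toy target

stages, residual = [], y.clone()
for _ in range(3):  # three stages; each fits what the previous stages left over
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), residual)
        loss.backward()
        opt.step()
    with torch.no_grad():
        residual = residual - net(x)  # pass the remaining error to the next stage
    stages.append(net)

with torch.no_grad():
    prediction = sum(net(x) for net in stages)  # final multistage approximation
```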
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Work on the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
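To make the setup in the entry above concrete, here is a hedged sketch of a small PINN for a 1D Poisson problem u''(x) = f(x) with zero boundary conditions and a high-frequency source, trained with SGD plus momentum (the optimizer variant the paper studies). The problem, architecture, and hyperparameters are illustrative assumptions, not the paper's experiments.

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)  # SGD with momentum

x = torch.rand(128, 1, requires_grad=True)  # interior collocation points in (0, 1)
with torch.no_grad():
    # Source chosen so the exact solution u(x) = sin(4*pi*x) has high-frequency content.
    f = -(4.0 * math.pi) ** 2 * torch.sin(4.0 * math.pi * x)
bc = torch.tensor([[0.0], [1.0]])  # boundary points with u = 0

for _ in range(5000):
    opt.zero_grad()
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_loss = ((d2u - f) ** 2).mean()  # residual of u''(x) = f(x)
    bc_loss = (net(bc) ** 2).mean()     # enforce u(0) = u(1) = 0
    (pde_loss + bc_loss).backward()
    opt.step()
```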
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the neural tangent kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Autoregressive neural-network wavefunctions for ab initio quantum chemistry [3.5987961950527287]
We parameterise the electronic wavefunction with a novel autoregressive neural network (ARN).
This allows us to perform electronic structure calculations on molecules with up to 30 spin-orbitals.
arXiv Detail & Related papers (2021-09-26T13:44:41Z)
- Adaptable Hamiltonian neural networks [0.0]
Hamiltonian Neural Networks (HNNs) represent a major class of physics-enhanced neural networks.
We introduce a class of HNNs capable of adaptable prediction of nonlinear physical systems.
We show that our parameter-cognizant HNN can successfully predict the route of transition to chaos.
arXiv Detail & Related papers (2021-02-25T23:53:51Z)
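For reference, the core HNN recipe underlying the entry above can be sketched as follows: a network learns a scalar Hamiltonian H(q, p), and the predicted dynamics follow Hamilton's equations via automatic differentiation. This is the standard HNN construction, not necessarily the paper's adaptable, parameter-cognizant variant; the harmonic-oscillator targets are a toy assumption.

```python
import torch
import torch.nn as nn

h_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def dynamics(state: torch.Tensor) -> torch.Tensor:
    """state holds (q, p) pairs; returns (dq/dt, dp/dt) = (dH/dp, -dH/dq)."""
    state = state.detach().requires_grad_(True)
    H = h_net(state).sum()
    dH = torch.autograd.grad(H, state, create_graph=True)[0]
    dHdq, dHdp = dH[:, 0:1], dH[:, 1:2]
    return torch.cat([dHdp, -dHdq], dim=1)

# Toy supervision: time derivatives of a unit harmonic oscillator,
# (q_dot, p_dot) = (p, -q); real data would come from observed trajectories.
states = torch.randn(256, 2)
targets = torch.cat([states[:, 1:2], -states[:, 0:1]], dim=1)

opt = torch.optim.Adam(h_net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dynamics(states), targets)
    loss.backward()
    opt.step()
```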
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
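A rough sketch of the min-max formulation described in the entry above, under simplifying assumptions: the model f tries to satisfy a moment condition E[g(X)(Y - f(X))] = 0 while a neural critic g searches for violations, with a quadratic penalty keeping the inner maximization well-posed. This is a generic adversarial moment-matching loop, not the paper's exact estimator or its convergence-guaranteed procedure.

```python
import torch
import torch.nn as nn

def mlp() -> nn.Module:
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

f, g = mlp(), mlp()  # both players are neural networks
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

X = torch.randn(512, 1)
Y = 2.0 * X + 0.1 * torch.randn(512, 1)  # toy data standing in for SEM samples

for _ in range(2000):
    # Critic step: g ascends the penalized moment objective.
    opt_g.zero_grad()
    obj = (g(X) * (Y - f(X).detach())).mean() - 0.25 * (g(X) ** 2).mean()
    (-obj).backward()
    opt_g.step()

    # Model step: f descends against the current critic.
    opt_f.zero_grad()
    loss_f = (g(X).detach() * (Y - f(X))).mean()
    loss_f.backward()
    opt_f.step()
```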