Cross Section Doppler Broadening prediction using Physically Informed
Deep Neural Networks
- URL: http://arxiv.org/abs/2208.07224v1
- Date: Thu, 11 Aug 2022 19:56:57 GMT
- Title: Cross Section Doppler Broadening prediction using Physically Informed
Deep Neural Networks
- Authors: Arthur Pignet, Luiz Leal and Vaibhav Jaiswal
- Abstract summary: Temperature dependence of the neutron-nucleus interaction is known as the Doppler broadening of the cross-sections.
This paper explores a novel non-linear approach based on deep learning techniques.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Temperature dependence of the neutron-nucleus interaction is known as the
Doppler broadening of the cross-sections. This is a well-known effect due to
the thermal motion of the target nuclei that occurs in the neutron-nucleus
interaction. The fast computation of such effects is crucial for any nuclear
application. Mechanisms have been developed that allow determining the Doppler
effects in the cross-section, most of them based on the numerical resolution of
the equation known as Solbrig's kernel, which is a cross-section Doppler
broadening formalism derived from a free gas atoms distribution hypothesis.
This paper explores a novel non-linear approach based on deep learning
techniques. Deep neural networks are trained on synthetic and experimental
data, serving as an alternative to the cross-section Doppler Broadening (DB).
This paper explores the possibility of using physically informed neural
networks, where the network is physically regularized to be the solution of a
partial derivative equation, inferred from Solbrig's kernel. The learning
process is demonstrated by using the fission, capture, and scattering cross
sections for $^{235}U$ in the energy range from thermal to 2250 eV.
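The physics regularization described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes that the broadened quantity u(v, T) approximately obeys a heat-equation-like PDE in temperature, du/dT = D * d2u/dv2, a form commonly associated with the free-gas (Solbrig) broadening kernel, and shows how a finite-difference residual of that PDE can be added to a data-misfit term to form a physics-informed loss. The constant D and the grid are illustrative assumptions.

```python
import numpy as np

def pde_residual(u, dv, dT, D):
    """Finite-difference residual of du/dT - D * d2u/dv2 on interior points.
    u has shape (n_T, n_v): rows are temperatures, columns are velocities."""
    du_dT = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dT)           # central diff in T
    d2u_dv2 = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dv**2
    return du_dT - D * d2u_dv2

def physics_informed_loss(u_pred, u_data, dv, dT, D, lam=1.0):
    """Data misfit plus lambda times the mean squared PDE residual,
    the composite objective a physically informed network would minimize."""
    data_loss = np.mean((u_pred - u_data) ** 2)
    res = pde_residual(u_pred, dv, dT, D)
    return data_loss + lam * np.mean(res ** 2)

# Sanity check with an exact heat-kernel solution of du/dT = D * d2u/dv2:
D = 0.5
v = np.linspace(-5.0, 5.0, 201)
T = np.linspace(1.0, 2.0, 101)
dv, dT = v[1] - v[0], T[1] - T[0]
V, TT = np.meshgrid(v, T)  # shapes (n_T, n_v)
u = np.exp(-V**2 / (4 * D * TT)) / np.sqrt(4 * np.pi * D * TT)

res = pde_residual(u, dv, dT, D)
print(np.max(np.abs(res)))  # near zero: the exact solution satisfies the PDE
```

In a real PINN, `u_pred` would come from a neural network and the residual would be computed by automatic differentiation at sampled collocation points rather than on a fixed finite-difference grid; the structure of the loss is the same.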
Related papers
- Novel Kernel Models and Exact Representor Theory for Neural Networks Beyond the Over-Parameterized Regime [52.00917519626559]
This paper presents two models of neural-networks and their training applicable to neural networks of arbitrary width, depth and topology.
We also present a novel, exact representor theory for layer-wise neural network training with unregularized gradient descent in terms of a local-extrinsic neural kernel (LeNK).
This representor theory gives insight into the role of higher-order statistics in neural network training and the effect of kernel evolution in neural-network kernel models.
arXiv Detail & Related papers (2024-05-24T06:30:36Z)
- Controlling the Inductive Bias of Wide Neural Networks by Modifying the Kernel's Spectrum [18.10812063219831]
We introduce Modified Spectrum Kernels (MSKs) to approximate kernels with desired eigenvalues.
We propose a preconditioned gradient descent method, which alters the trajectory of gradient descent.
Our method is both computationally efficient and simple to implement.
arXiv Detail & Related papers (2023-07-26T22:39:47Z)
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Dilute neutron star matter from neural-network quantum states [58.720142291102135]
Low-density neutron matter is characterized by the formation of Cooper pairs and the onset of superfluidity.
We model this density regime by capitalizing on the expressivity of the hidden-nucleon neural-network quantum states combined with variational Monte Carlo and reconfiguration techniques.
arXiv Detail & Related papers (2022-12-08T17:55:25Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that a phenomenon can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Intrinsic mechanisms for drive-dependent Purcell decay in superconducting quantum circuits [68.8204255655161]
We find that, in a wide range of settings, the cavity-qubit detuning controls whether a non-zero photonic population increases or decreases the qubit's Purcell decay.
Our method combines insights from a Keldysh treatment of the system, and Lindblad theory.
arXiv Detail & Related papers (2021-06-09T16:21:31Z)
- Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks [4.1201966507589995]
We introduce a weight-sharing constraint when optimizing neural network-based models for different molecular geometries.
We find that this technique can accelerate optimization when considering sets of nuclear geometries of the same molecule by an order of magnitude.
arXiv Detail & Related papers (2021-05-18T08:23:09Z)
- HypoSVI: Hypocenter inversion with Stein variational inference and Physics Informed Neural Networks [6.102077733475759]
We introduce a scheme for hypocenter inversion based on Stein variational inference.
Our approach uses a differentiable forward model in the form of a neural network.
We show that the computational demands scale efficiently with the number of differential times.
arXiv Detail & Related papers (2021-01-09T01:56:48Z)
- Extracting Electron Scattering Cross Sections from Swarm Data using Deep Neural Networks [2.28438857884398]
We implement an artificial neural network (ANN), a convolutional neural network (CNN), and a densely connected convolutional network (DenseNet) for this investigation.
We test the validity of predictions by all these trained networks for a broad range of gas species.
arXiv Detail & Related papers (2020-11-30T11:48:15Z)
- A multiconfigurational study of the negatively charged nitrogen-vacancy center in diamond [55.58269472099399]
Deep defects in wide band gap semiconductors have emerged as leading qubit candidates for realizing quantum sensing and information applications.
Here we show that unlike single-particle treatments, the multiconfigurational quantum chemistry methods, traditionally reserved for atoms/molecules, accurately describe the many-body characteristics of the electronic states of these defect centers.
arXiv Detail & Related papers (2020-08-24T01:49:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.