Neural network analysis of neutron and X-ray reflectivity data:
Incorporating prior knowledge for tackling the phase problem
- URL: http://arxiv.org/abs/2307.05364v1
- Date: Wed, 28 Jun 2023 11:15:53 GMT
- Title: Neural network analysis of neutron and X-ray reflectivity data:
Incorporating prior knowledge for tackling the phase problem
- Authors: Valentin Munteanu, Vladimir Starostin, Alessandro Greco, Linus Pithan,
Alexander Gerlach, Alexander Hinderhofer, Stefan Kowarik, Frank Schreiber
- Abstract summary: We present an approach that utilizes prior knowledge to regularize the training process over larger parameter spaces.
We demonstrate the effectiveness of our method in various scenarios, including multilayer structures with box model parameterization.
In contrast to previous methods, our approach scales favorably when increasing the complexity of the inverse problem.
- Score: 141.5628276096321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the lack of phase information, determining the physical parameters of
multilayer thin films from measured neutron and X-ray reflectivity curves is,
on a fundamental level, an underdetermined inverse problem. This so-called
phase problem poses limitations on standard neural networks, constraining the
range and number of considered parameters in previous machine learning
solutions. To overcome this, we present an approach that utilizes prior
knowledge to regularize the training process over larger parameter spaces. We
demonstrate the effectiveness of our method in various scenarios, including
multilayer structures with box model parameterization and a physics-inspired
special parameterization of the scattering length density profile for a
multilayer structure. By leveraging the input of prior knowledge, we can
improve the training dynamics and address the underdetermined ("ill-posed")
nature of the problem. In contrast to previous methods, our approach scales
favorably when increasing the complexity of the inverse problem, working
properly even for a 5-layer multilayer model and an N-layer periodic multilayer
model with up to 17 open parameters.
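The abstract describes regularizing training over large parameter spaces by injecting prior knowledge about the film parameters. The paper's exact loss is not reproduced here; as a hypothetical illustration, one simple form of such a prior is a hinge-style penalty that is zero while the predicted box-model parameters stay inside user-supplied prior bounds and grows quadratically outside them (all parameter names and values below are made up for the example):

```python
import numpy as np

def prior_penalty(pred, lo, hi, weight=1.0):
    """Hinge penalty: zero inside the prior bounds [lo, hi],
    quadratic growth outside them."""
    below = np.maximum(lo - pred, 0.0)   # violation of the lower bound
    above = np.maximum(pred - hi, 0.0)   # violation of the upper bound
    return weight * np.sum(below**2 + above**2)

# Hypothetical box-model parameters: thickness (nm), roughness (nm), SLD
pred = np.array([55.0, 3.2, 9.5])   # network output for one sample
lo   = np.array([40.0, 0.0, 8.0])   # lower prior bounds
hi   = np.array([60.0, 5.0, 9.0])   # upper prior bounds

loss = prior_penalty(pred, lo, hi)  # only the SLD term (9.5 > 9.0) contributes
```

In a training loop, a term like this would be added to the data-fit loss so that gradients steer predictions back into the physically plausible region specified by the prior.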
Related papers
- Physics-informed Mesh-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to parameter discretizations of variable size and irregular domain shapes.
Inspired by deep operator neural networks, our model repeatedly applies a discretization-independent learning of the parameter embedding.
arXiv Detail & Related papers (2024-04-21T12:41:30Z)
- Explaining the Machine Learning Solution of the Ising Model [0.0]
This work shows how it can be accomplished for the ferromagnetic Ising model, the main target of several machine learning (ML) studies in statistical physics.
By using a neural network (NN) without hidden layers (the simplest possible) and informed by the symmetry of the Hamiltonian, an explanation is provided for the strategy used in finding the supervised learning solution.
These results pave the way to a physics-informed explainable generalized framework, enabling the extraction of physical laws and principles from the parameters of the models.
arXiv Detail & Related papers (2024-02-18T20:47:33Z)
- Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit [0.0]
Large-width dynamics has emerged as a fruitful viewpoint and led to practical insights on real-world deep networks.
For two-layer neural networks, it has been understood that the nature of the trained model radically changes depending on the scale of the initial random weights.
We propose various methods to avoid this trivial behavior and analyze in detail the resulting dynamics.
arXiv Detail & Related papers (2021-10-29T07:53:35Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Hybrid neural network reduced order modelling for turbulent flows with geometric parameters [0.0]
This paper introduces a new technique that combines a classical Galerkin-projection approach with a data-driven method, yielding a versatile and accurate algorithm for solving geometrically parametrized incompressible turbulent Navier-Stokes problems.
The effectiveness of this procedure is demonstrated on two different test cases: a classical academic back step problem and a shape deformation Ahmed body application.
arXiv Detail & Related papers (2021-07-20T16:06:18Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for resolving such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
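This summary describes adjusting the network depth dynamically at inference time. The paper's learned halting policy is not detailed here; as a rough stand-in, the sketch below unrolls ISTA iterations (a standard scheme for sparse recovery, one "layer" per iteration) and halts early once a residual criterion is met, so different inputs execute different depths:

```python
import numpy as np

def adaptive_depth_ista(A, y, lam=0.01, tol=1e-3, max_layers=50):
    """Unrolled ISTA where the number of executed 'layers' is chosen at
    inference time by a residual-based halting rule (a simple stand-in
    for a learned halting policy)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    depth = 0
    for depth in range(1, max_layers + 1):
        grad = A.T @ (A @ x - y)             # gradient of the data-fit term
        x = x - grad / L                     # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        if np.linalg.norm(A @ x - y) < tol:  # halt early when residual is small
            break
    return x, depth

# Hypothetical sparse recovery problem
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10); x_true[2] = 1.0
y = A @ x_true
x_hat, depth = adaptive_depth_ista(A, y)
```

Easy inputs trigger the halting rule after a few iterations, while harder ones run closer to `max_layers`, which is the essence of depth that adapts per task.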
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Multi-Task Learning for Multi-Dimensional Regression: Application to Luminescence Sensing [0.0]
A new approach to non-linear regression is to use neural networks, particularly feed-forward architectures with a sufficient number of hidden layers and an appropriate number of output neurons.
We propose multi-task learning (MTL) architectures. These are characterized by multiple branches of task-specific layers, which have as input the output of a common set of layers.
To demonstrate the power of this approach for multi-dimensional regression, the method is applied to luminescence sensing.
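The architecture described above, multiple task-specific branches fed by a common set of layers, is the hard-parameter-sharing pattern. A minimal forward-pass sketch (all layer sizes are hypothetical, and the paper's actual architecture may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiTaskRegressor:
    """Hard parameter sharing: a common trunk feeds several
    task-specific output branches."""
    def __init__(self, n_in, n_shared, task_dims):
        # One shared weight matrix, plus one head per task
        self.W_shared = rng.normal(scale=0.1, size=(n_in, n_shared))
        self.heads = [rng.normal(scale=0.1, size=(n_shared, d))
                      for d in task_dims]

    def forward(self, x):
        h = relu(x @ self.W_shared)           # common layers
        return [h @ W for W in self.heads]    # one output vector per task

# Two tasks: a 1-dimensional and a 3-dimensional regression target
model = MultiTaskRegressor(n_in=16, n_shared=32, task_dims=[1, 3])
outs = model.forward(np.ones(16))
```

The design choice is that the trunk learns features useful to all tasks, while each branch specializes, which typically regularizes the shared representation compared with training separate networks.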
arXiv Detail & Related papers (2020-07-27T21:23:51Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.