Empirical fits to inclusive electron-carbon scattering data obtained by deep-learning methods
- URL: http://arxiv.org/abs/2312.17298v2
- Date: Tue, 16 Jul 2024 09:00:04 GMT
- Title: Empirical fits to inclusive electron-carbon scattering data obtained by deep-learning methods
- Authors: Beata E. Kowal, Krzysztof M. Graczyk, Artur M. Ankowski, Rwik Dharmapal Banerjee, Hemant Prasad, Jan T. Sobczyk
- Abstract summary: We obtain empirical fits to the electron-scattering cross sections for carbon over a broad kinematic region.
We consider two different methods of obtaining such model-independent parametrizations and the corresponding uncertainties.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Employing the neural network framework, we obtain empirical fits to the electron-scattering cross sections for carbon over a broad kinematic region, extending from the quasielastic peak through resonance excitation to the onset of deep-inelastic scattering. We consider two different methods of obtaining such model-independent parametrizations and the corresponding uncertainties: based on the bootstrap approach and the Monte Carlo dropout approach. In our analysis, the $\chi^2$ defines the loss function, including point-to-point and normalization uncertainties for each independent set of measurements. Our statistical approaches lead to fits of comparable quality and similar uncertainties of the order of $7$%. To test these models, we compare their predictions to test datasets excluded from the training process and theoretical predictions obtained within the spectral function approach. The predictions of both models agree with experimental measurements and theoretical calculations. We also perform a comparison to a dataset lying beyond the covered kinematic region, and find that the bootstrap approach shows better interpolation and extrapolation abilities than the one based on the dropout algorithm.
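The abstract's loss function and bootstrap procedure can be sketched in a few lines. This is a minimal illustration, not the authors' code: the χ² combines point-to-point errors with a normalization nuisance parameter per dataset, and bootstrap replicas are formed by resampling the measurements with replacement (the function and variable names are assumptions for illustration).

```python
import numpy as np

def chi2_loss(model, data, sigma, norm, delta_norm):
    """Chi^2 for one independent dataset: point-to-point errors (sigma)
    plus a penalty for the normalization nuisance parameter (norm),
    whose prior uncertainty is delta_norm."""
    point_term = np.sum(((norm * model - data) / sigma) ** 2)
    norm_term = ((norm - 1.0) / delta_norm) ** 2
    return point_term + norm_term

def bootstrap_replica(rng, data):
    """One bootstrap replica: indices drawn with replacement.
    Refitting the network on many such replicas and taking the spread
    of their predictions yields the uncertainty band."""
    return rng.integers(0, len(data), size=len(data))
```

In the dropout alternative, a single network is trained with dropout left active at prediction time, and the spread over stochastic forward passes plays the role of the replica spread.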
Related papers
- Predicting path-dependent processes by deep learning [0.5893124686141782]
We investigate a deep learning method for predicting path-dependent processes based on discretely observed historical information.
With the frequency of discrete observations tending to infinity, the predictions based on discrete observations converge to the predictions based on continuous observations.
We apply the method to the fractional Brownian motion and the fractional Ornstein-Uhlenbeck process as examples.
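The fractional Brownian motion used as a test case above can be simulated exactly from its covariance function; a minimal sketch (Cholesky factorization of the covariance, not the paper's prediction method):

```python
import numpy as np

def fbm_cholesky(n, hurst, T=1.0, seed=0):
    """Sample one fractional Brownian motion path on n grid points in (0, T]
    via Cholesky factorization of the exact covariance
    Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z
```

For `hurst=0.5` the covariance reduces to `min(s, t)` and the sampler produces ordinary Brownian motion.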
arXiv Detail & Related papers (2024-08-19T12:24:25Z)
- Estimation of multiple mean vectors in high dimension [4.2466572124753]
We endeavour to estimate numerous multi-dimensional means of various probability distributions on a common space based on independent samples.
Our approach involves forming estimators through convex combinations of empirical means derived from these samples.
arXiv Detail & Related papers (2024-03-22T08:42:41Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- A similarity-based Bayesian mixture-of-experts model [0.5156484100374058]
We present a new non-parametric mixture-of-experts model for multivariate regression problems.
Using a conditionally specified model, predictions for out-of-sample inputs are based on similarities to each observed data point.
Posterior inference is performed on the parameters of the mixture as well as the distance metric.
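The idea of basing predictions on similarities to each observed data point has a classic non-Bayesian analogue, Nadaraya-Watson kernel regression; a minimal sketch (this is an illustration of the similarity-weighting idea, not the paper's mixture-of-experts model):

```python
import numpy as np

def kernel_predict(x_train, y_train, x_new, bandwidth=0.5):
    """Nadaraya-Watson estimate: a similarity-weighted average of the
    observed targets, with a Gaussian kernel as the similarity measure."""
    d2 = (x_train[:, None] - x_new[None, :]) ** 2
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w * y_train[:, None]).sum(axis=0) / w.sum(axis=0)
```

The Bayesian model in the paper additionally infers the distance metric itself, rather than fixing the bandwidth by hand.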
arXiv Detail & Related papers (2020-12-03T18:08:30Z)
- Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
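The flavor of such a mean-field formula can be seen in the classic two-class analogue, the probit approximation E[σ(a)] ≈ σ(μ/√(1 + πσ²/8)) for a ~ N(μ, σ²); this well-known closed form is shown against a Monte Carlo estimate (it is not the paper's exact softmax formula):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def approx_expected_sigmoid(mu, var):
    """Probit-style closed-form approximation to E[sigmoid(a)], a ~ N(mu, var)."""
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))

def mc_expected_sigmoid(mu, var, n=200_000, seed=0):
    """Monte Carlo reference for the same analytically intractable integral."""
    a = np.random.default_rng(seed).normal(mu, np.sqrt(var), n)
    return sigmoid(a).mean()
```

The appeal of such formulas is the same as in the paper: a single deterministic forward pass replaces many sampled ones.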
arXiv Detail & Related papers (2020-06-13T07:32:38Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
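A simple baseline in this family, useful for intuition, is the gradient of the log of a Gaussian kernel density estimate; a one-dimensional sketch (a naive baseline, not the regularized estimators proposed in the paper):

```python
import numpy as np

def kde_score(samples, x, bandwidth=0.3):
    """Nonparametric score estimate s(x) = d/dx log p_hat(x), where p_hat
    is a Gaussian KDE of the samples. Normalization constants cancel
    in the ratio dp/p."""
    d = x[:, None] - samples[None, :]
    k = np.exp(-0.5 * (d / bandwidth) ** 2)
    p = k.mean(axis=1)
    dp = (-d / bandwidth ** 2 * k).mean(axis=1)
    return dp / p
```

For samples from N(0, 1) the KDE behaves like N(0, 1 + h²), so the estimate approaches the true score −x up to the bandwidth-induced bias.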
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Maximum likelihood estimation and uncertainty quantification for Gaussian process approximation of deterministic functions [10.319367855067476]
This article provides one of the first theoretical analyses in the context of Gaussian process regression with a noiseless dataset.
We show that the maximum likelihood estimation of the scale parameter alone provides significant adaptation against misspecification of the Gaussian process model.
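The scale-parameter MLE referred to above has a standard closed form: for y ~ N(0, σ²K) with K the fixed kernel matrix, σ̂² = yᵀK⁻¹y / n. A minimal sketch of that formula (the well-known estimator, independent of the paper's analysis):

```python
import numpy as np

def mle_scale(y, K):
    """Closed-form MLE of sigma^2 in y ~ N(0, sigma^2 * K):
    sigma_hat^2 = y^T K^{-1} y / n."""
    alpha = np.linalg.solve(K, y)  # K^{-1} y without forming the inverse
    return float(y @ alpha) / len(y)
```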
arXiv Detail & Related papers (2020-01-29T17:20:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.