Re-optimization of a deep neural network model for electron-carbon scattering using new experimental data
- URL: http://arxiv.org/abs/2508.00996v1
- Date: Fri, 01 Aug 2025 18:05:38 GMT
- Authors: Beata E. Kowal, Krzysztof M. Graczyk, Artur M. Ankowski, Rwik Dharmapal Banerjee, Jose L. Bonilla, Hemant Prasad, Jan T. Sobczyk
- Abstract summary: We present an updated deep neural network model for inclusive electron-carbon scattering. We incorporate recent experimental data, as well as older measurements in the deep inelastic scattering region.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an updated deep neural network model for inclusive electron-carbon scattering. Using the bootstrap model [Phys.Rev.C 110 (2024) 2, 025501] as a prior, we incorporate recent experimental data, as well as older measurements in the deep inelastic scattering region, to derive a re-optimized posterior model. We examine the impact of these new inputs on model predictions and associated uncertainties. Finally, we evaluate the resulting cross-section predictions in the kinematic range relevant to the Hyper-Kamiokande and DUNE experiments.
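The abstract describes re-optimization in a Bayesian sense: the earlier bootstrap model serves as a prior, and new data pull the fit toward a posterior. A minimal sketch of that idea follows; the one-parameter toy "model", the quadratic prior penalty, and all names are our illustrative assumptions, not the paper's actual network or data.

```python
# Minimal sketch: re-optimizing a fitted model on new data while keeping the
# earlier fit as a prior. The single parameter and quadratic penalty are
# stand-ins for a network's weights and a Gaussian prior around the old fit.
import random

random.seed(0)

def reoptimize(prior_theta, new_data, prior_weight=1.0, lr=0.05, steps=500):
    """Gradient descent on sum_i (theta - y_i)^2 + prior_weight*(theta - prior_theta)^2.

    The penalty plays the role of the prior from the earlier fit; the optimum
    is a precision-weighted average of the prior value and the new data.
    """
    theta = prior_theta
    n = len(new_data)
    for _ in range(steps):
        grad = (sum(2.0 * (theta - y) for y in new_data)
                + 2.0 * prior_weight * (theta - prior_theta))
        theta -= lr * grad / (n + prior_weight)
    return theta

prior_theta = 1.0                                   # "bootstrap model" estimate
new_data = [2.0 + random.gauss(0, 0.1) for _ in range(20)]  # toy "new measurements"
posterior_theta = reoptimize(prior_theta, new_data)
# the posterior lands between the prior value and the new-data mean,
# weighted by how much new data there is
```

With twenty new points and unit prior weight, the data dominate, mirroring how abundant new measurements would shift the re-optimized model away from the bootstrap prior.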
Related papers
- Transfer Learning for Neutrino Scattering: Domain Adaptation with GANs [0.0]
We use transfer learning to extrapolate the physics knowledge encoded in a Generative Adversarial Network (GAN) model trained on synthetic charged-current (CC) neutrino-carbon inclusive scattering data. We also assess the effectiveness of transfer learning in re-optimizing a custom model when new data comes from a different neutrino-nucleus interaction model.
arXiv Detail & Related papers (2025-08-18T15:08:13Z)
- Uncertainty quantification of neural network models of evolving processes via Langevin sampling [0.7329200485567827]
We propose a scalable, approximate-inference hypernetwork framework for a general model of history-dependent processes. We demonstrate the performance of the ensemble-sampling hypernetwork on chemical reaction and material physics data.
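The Langevin-sampling idea behind this ensemble approach can be sketched in a few lines: inject Gaussian noise into gradient steps so the iterates sample a posterior rather than collapse to a point estimate. The step size, the toy standard-normal target, and all names below are our assumptions for illustration.

```python
# Sketch of unadjusted Langevin sampling: theta += step*grad_log_post(theta)
# plus sqrt(2*step) Gaussian noise, so the chain samples the posterior.
import math
import random

random.seed(1)

def langevin_samples(grad_log_post, theta0, step=0.05, n_samples=5000, burn_in=1000):
    theta, out = theta0, []
    for i in range(n_samples + burn_in):
        theta += step * grad_log_post(theta) + math.sqrt(2.0 * step) * random.gauss(0, 1)
        if i >= burn_in:
            out.append(theta)
    return out

# Toy posterior: standard normal, so grad log p(theta) = -theta.
samples = langevin_samples(lambda t: -t, theta0=3.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# the sample mean and variance approximate those of the target posterior,
# giving an uncertainty estimate rather than a single optimum
```

In the hypernetwork setting, `theta` would be the network parameters and the spread of the samples would translate into predictive uncertainty.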
arXiv Detail & Related papers (2025-04-21T04:45:40Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) approximates the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
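Since the summary mentions kernel density estimation, the self-consuming loop can be sketched concretely: each generation fits a Gaussian KDE to samples drawn from the previous generation's KDE. The bandwidth, sample size, and variance-growth diagnostic below are our choices, not the paper's setup.

```python
# Sketch of a self-consuming loop with Gaussian kernel density estimation:
# every generation resamples from a KDE fitted to the previous generation.
import random

random.seed(2)

def sample_kde(data, bandwidth, n):
    """Draw from a Gaussian KDE: pick a data point, add kernel noise."""
    return [random.choice(data) + random.gauss(0, bandwidth) for _ in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

real = [random.gauss(0, 1) for _ in range(2000)]
gen = real
variances = [variance(gen)]
for _ in range(5):                      # five self-consuming generations
    gen = sample_kde(gen, bandwidth=0.3, n=2000)
    variances.append(variance(gen))
# each KDE resample adds roughly bandwidth^2 of extra variance, so the
# learned distribution drifts away from the real data generation by generation
```

The steady variance inflation is one simple instance of the error propagation that the theoretical framework above quantifies.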
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization [45.72323731094864]
We present a theoretical framework to analyze two-layer neural network-based diffusion models.
We prove that training shallow neural networks for score prediction can be done by solving a single convex program.
Our results provide a precise characterization of what neural network-based diffusion models learn in non-asymptotic settings.
arXiv Detail & Related papers (2024-02-03T00:20:25Z)
- Deep Neural Networks for Semiparametric Frailty Models via H-likelihood [0.0]
We propose a new deep neural network-based frailty model (DNN-FM) for the prediction of time-to-event data.
Joint estimators of the new h-likelihood model provide maximum likelihood estimates for fixed parameters and best unbiased predictors for random frailties.
arXiv Detail & Related papers (2023-07-13T06:46:51Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
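The surrogate-plus-gradients pattern described above can be sketched simply: a cheap differentiable model stands in for the simulator, and gradient descent recovers an unknown parameter from observed data. Here a closed-form function and a hand-written derivative replace the trained network and automatic differentiation; all names are our assumptions.

```python
# Sketch: recover an unknown parameter by gradient descent through a
# differentiable surrogate of the simulator.

def surrogate(param, x):
    """Stand-in for a network trained to mimic simulated data."""
    return param * x * x            # d(surrogate)/d(param) = x*x, known in closed form

def recover_parameter(xs, ys, guess=0.0, lr=0.01, steps=2000):
    p = guess
    for _ in range(steps):
        # gradient of the mean squared misfit with respect to the parameter
        grad = sum(2.0 * (surrogate(p, x) - y) * x * x for x, y in zip(xs, ys)) / len(xs)
        p -= lr * grad
    return p

xs = [0.5, 1.0, 1.5, 2.0]
true_param = 1.7
ys = [surrogate(true_param, x) for x in xs]   # noiseless toy "experimental" data
fitted = recover_parameter(xs, ys)
# gradient descent converges to the parameter that generated the data
```

Because the surrogate is trained once and is differentiable, this fitting loop can run in real time against new measurements, which is the point made in the summary.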
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future [73.03458424369657]
In real-time forecasting in public health, data collection is a non-trivial and demanding task.
The 'backfill' phenomenon and its effect on model performance have barely been studied in the prior literature.
We formulate a novel problem and a neural framework, Back2Future, that aims to refine a given model's predictions in real time.
arXiv Detail & Related papers (2021-06-08T14:48:20Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- A robust low data solution: dimension prediction of semiconductor nanorods [5.389015968413988]
A robust deep neural network-based regression algorithm has been developed for precise prediction of the length, width, and aspect ratio of semiconductor nanorods (NRs).
The deep neural network is further applied to build a regression model that performs well on both the original data and generated data drawn from a similar distribution.
arXiv Detail & Related papers (2020-10-27T07:51:38Z)
- Predicting nucleation near the spinodal in the Ising model using machine learning [3.5056930099070853]
We use a Convolutional Neural Network (CNN) and two logistic regression models to predict the probability of nucleation in the two-dimensional Ising model.
The CNN outperforms the logistic regression models near the spinodal of the Long Range Ising model, but the accuracy of its predictions decreases as the quenches approach the spinodal.
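The logistic-regression baseline mentioned above is easy to sketch: fit a linear classifier on a single hand-picked summary statistic of a spin configuration. The toy configurations, the choice of mean "magnetization" as the feature, and all names are our illustrative assumptions; the paper's actual features and data differ.

```python
# Sketch: logistic regression on the mean magnetization of toy spin
# configurations, as a stand-in for the simpler baseline models.
import math
import random

random.seed(3)

def make_config(bias, size=64):
    """Toy configuration: +1/-1 spins with a field-dependent bias."""
    return [1 if random.random() < 0.5 + bias else -1 for _ in range(size)]

def magnetization(config):
    return sum(config) / len(config)

# Label 1 = "nucleates" (biased toward +1), 0 = "does not".
data = ([(magnetization(make_config(0.15)), 1) for _ in range(200)]
        + [(magnetization(make_config(-0.15)), 0) for _ in range(200)])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(300):                    # plain gradient-descent logistic fit
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum((1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (y == 1)
               for x, y in data) / len(data)
# a single well-chosen statistic already separates the two classes well;
# a CNN sees the full configuration and can do better near the spinodal
```

The contrast the summary draws is between such hand-crafted-feature baselines and a CNN that learns its features directly from the raw configuration.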
arXiv Detail & Related papers (2020-04-20T19:04:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.