Machine Learning Neutrino-Nucleus Cross Sections
- URL: http://arxiv.org/abs/2412.16303v2
- Date: Thu, 08 May 2025 18:00:00 GMT
- Title: Machine Learning Neutrino-Nucleus Cross Sections
- Authors: Daniel C. Hackett, Joshua Isaacson, Shirley Weishi Li, Karla Tame-Narvaez, Michael L. Wagman
- Abstract summary: We show that an accurate neural-network model of the cross section can be learned from near-detector data. We then perform a neutrino oscillation analysis with simulated far-detector events, finding that the modeled cross section achieves results consistent with what could be obtained if the true cross section were known exactly.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neutrino-nucleus scattering cross sections are critical theoretical inputs for long-baseline neutrino oscillation experiments. However, robust modeling of these cross sections remains challenging. For a simple but physically motivated toy model of the DUNE experiment, we demonstrate that an accurate neural-network model of the cross section -- leveraging Standard Model symmetries -- can be learned from near-detector data. We then perform a neutrino oscillation analysis with simulated far-detector events, finding that the modeled cross section achieves results consistent with what could be obtained if the true cross section were known exactly. This proof-of-principle study highlights the potential of future neutrino near-detector datasets and data-driven cross-section models.
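The core idea of the abstract — fitting a neural-network surrogate for a cross section to near-detector-like data — can be illustrated with a toy sketch. This is a hypothetical, minimal example (not the paper's actual model or symmetry-constrained architecture): a one-hidden-layer network is trained with plain gradient descent to regress a made-up cross-section curve from noisy simulated events.

```python
# Toy sketch (hypothetical, NOT the paper's model): learn a surrogate for a
# "cross section" as a function of neutrino energy from noisy simulated data.
import numpy as np

rng = np.random.default_rng(0)

def true_cross_section(E):
    # Arbitrary stand-in shape for a neutrino-nucleus cross section.
    return E / (1.0 + E)

# Simulated "near-detector" data: energies and noisy cross-section values.
E = rng.uniform(0.1, 5.0, size=512)
y = true_cross_section(E) + rng.normal(0.0, 0.01, size=E.shape)
x, t = E[:, None], y[:, None]

# One-hidden-layer network, trained by plain gradient descent.
W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = np.mean((pred0 - t) ** 2)  # MSE before training

for _ in range(2000):
    h, pred = forward(x)
    g = 2.0 * (pred - t) / len(x)        # dLoss/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    gW1, gb1 = x.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - t) ** 2)          # MSE after training
print(f"MSE: {loss0:.4f} -> {loss:.4f}")
```

In the paper's setting the learned surrogate would then replace the unknown true cross section in a far-detector oscillation fit; here the sketch only demonstrates the regression step.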
Related papers
- Transfer Learning for Neutrino Scattering: Domain Adaptation with GANs [0.0]
We use transfer learning to extrapolate the physics knowledge encoded in a Generative Adversarial Network (GAN) model trained on synthetic charged-current (CC) neutrino-carbon inclusive scattering data. We also assess the effectiveness of transfer learning in re-optimizing a custom model when new data comes from a different neutrino-nucleus interaction model.
arXiv Detail & Related papers (2025-08-18T15:08:13Z) - Re-optimization of a deep neural network model for electron-carbon scattering using new experimental data [0.0]
We present an updated deep neural network model for inclusive electron-carbon scattering. We incorporate recent experimental data, as well as older measurements in the deep inelastic scattering region.
arXiv Detail & Related papers (2025-08-01T18:05:38Z) - First-principle crosstalk dynamics and Hamiltonian learning via Rabi experiments [8.258634148681306]
We present a description of crosstalk and learn the underlying parameters by executing novel simultaneous Rabi experiments.
We observe excellent agreement between our theoretical predictions and experimental results.
This method provides whole-chip crosstalk characterization, a useful tool for guiding quantum processor design.
arXiv Detail & Related papers (2025-02-07T22:27:41Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Deep Generative Models for Ultra-High Granularity Particle Physics Detector Simulation: A Voyage From Emulation to Extrapolation [0.0]
This thesis aims to overcome this challenge for the Pixel Vertex Detector (PXD) at the Belle II experiment.
This study introduces, for the first time, the results of using deep generative models for ultra-high granularity detector simulation in Particle Physics.
arXiv Detail & Related papers (2024-03-05T23:12:47Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - Designing Observables for Measurements with Deep Learning [0.12277343096128711]
We propose to design targeted observables with machine learning.
Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest.
We demonstrate this idea in simulation using two physics models for inclusive measurements in deep inelastic scattering.
arXiv Detail & Related papers (2023-10-12T20:54:34Z) - End-to-end Phase Field Model Discovery Combining Experimentation, Crowdsourcing, Simulation and Learning [9.763339269757227]
We present Phase-Field-Lab platform for end-to-end phase field model discovery.
Phase-Field-Lab combines (i) a streamlined annotation tool which reduces the annotation time; (ii) an end-to-end neural model which automatically learns phase field models from data; and (iii) novel interfaces and visualizations.
Our platform is deployed in the analysis of nano-structure evolution in materials under extreme conditions.
arXiv Detail & Related papers (2023-09-13T22:44:04Z) - Machine learning enabled experimental design and parameter estimation for ultrafast spin dynamics [54.172707311728885]
We introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED)
Our method employs a neural network model as a surrogate for large-scale spin dynamics simulations, enabling precise distribution and utility calculations in BOED.
Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time.
arXiv Detail & Related papers (2023-06-03T06:19:20Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Interpretable Joint Event-Particle Reconstruction for Neutrino Physics at NOvA with Sparse CNNs and Transformers [124.29621071934693]
We present a novel neural network architecture that combines the spatial learning enabled by convolutions with the contextual learning enabled by attention.
TransformerCVN simultaneously classifies each event and reconstructs every individual particle's identity.
This architecture enables us to perform several interpretability studies which provide insights into the network's predictions.
arXiv Detail & Related papers (2023-03-10T20:36:23Z) - Physics-informed CoKriging model of a redox flow battery [68.8204255655161]
Redox flow batteries (RFBs) offer the capability to store large amounts of energy cheaply and efficiently.
There is a need for fast and accurate models of the charge-discharge curve of a RFB to potentially improve the battery capacity and performance.
We develop a multifidelity model for predicting the charge-discharge curve of a RFB.
arXiv Detail & Related papers (2021-06-17T00:49:55Z) - Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting [0.0]
Top-down approaches that learn neural network (NN) potentials directly from experimental data have received less attention.
We present the Differentiable Trajectory Reweighting (DiffTRe) method, which bypasses differentiation through the MD simulation for time-independent observables.
We show effectiveness of DiffTRe in learning NN potentials for an atomistic model of diamond and a coarse-grained model of water based on diverse experimental observables.
arXiv Detail & Related papers (2021-06-02T13:10:43Z) - Phase Detection with Neural Networks: Interpreting the Black Box [58.720142291102135]
Neural networks (NNs) typically offer little insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of NN when trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
arXiv Detail & Related papers (2020-04-09T17:45:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.