Micro-Macro Tensor Neural Surrogates for Uncertainty Quantification in Collisional Plasma
- URL: http://arxiv.org/abs/2512.24205v1
- Date: Tue, 30 Dec 2025 13:07:35 GMT
- Title: Micro-Macro Tensor Neural Surrogates for Uncertainty Quantification in Collisional Plasma
- Authors: Wei Chen, Giacomo Dimarco, Lorenzo Pareschi
- Abstract summary: Plasma equations exhibit pronounced sensitivity to microscopic perturbations in model parameters and data. The cost of uncertainty sampling, the high-dimensional phase space, and multiscale stiffness pose severe challenges to both computational efficiency and error control. We present a variance-reduced Monte Carlo framework for UQ in which neural network surrogates replace the costly evaluations of the Landau collision term.
- Score: 3.7863228436382013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plasma kinetic equations exhibit pronounced sensitivity to microscopic perturbations in model parameters and data, making reliable and efficient uncertainty quantification (UQ) essential for predictive simulations. However, the cost of uncertainty sampling, the high-dimensional phase space, and multiscale stiffness pose severe challenges to both computational efficiency and error control in traditional numerical methods. These aspects are further exacerbated in the presence of collisions, where the high-dimensional nonlocal collision integrals and conservation properties impose severe constraints. To overcome this, we present a variance-reduced Monte Carlo framework for UQ in the Vlasov--Poisson--Landau (VPL) system, in which neural network surrogates replace the multiple costly evaluations of the Landau collision term. The method couples a high-fidelity, asymptotic-preserving VPL solver with inexpensive, strongly correlated surrogates based on the Vlasov--Poisson--Fokker--Planck (VPFP) and Euler--Poisson (EP) equations. For the surrogate models, we introduce a generalization of the separable physics-informed neural network (SPINN), developing a class of tensor neural networks based on an anisotropic micro-macro decomposition, to reduce velocity-moment costs, model complexity, and the curse of dimensionality. To further increase correlation with VPL, we calibrate the VPFP model and design an asymptotic-preserving SPINN whose small- and large-Knudsen limits recover the EP and VP systems, respectively. Numerical experiments show substantial variance reduction over standard Monte Carlo, accurate statistics with far fewer high-fidelity samples, and lower wall-clock time, while maintaining robustness to stochastic dimension.
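The variance-reduction strategy described in the abstract (a cheap, strongly correlated surrogate acting as a control variate for the expensive model) can be sketched in a generic toy setting. Everything below is a minimal illustration under stated assumptions: `f_hi` and `f_lo` are hypothetical stand-ins for a high-fidelity observable and a correlated surrogate, not the paper's VPL/VPFP solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: f_hi plays the role of an expensive high-fidelity
# observable, f_lo a cheap surrogate that is strongly correlated with it
# (the role played by the VPFP/EP models in the paper).
def f_hi(z):
    return np.sin(z) + 0.05 * z**2

def f_lo(z):
    return np.sin(z)

N_hi, N_lo = 200, 20_000         # few expensive samples, many cheap ones
z_hi = rng.normal(size=N_hi)     # samples of the uncertain parameter
z_lo = rng.normal(size=N_lo)

# Control-variate coefficient estimated from the paired expensive samples.
c = np.cov(f_hi(z_hi), f_lo(z_hi))
lam = c[0, 1] / c[1, 1]

# Variance-reduced estimator: correct the small high-fidelity sample mean
# with the surrogate mean computed from many cheap samples.
mu_lo = f_lo(z_lo).mean()
estimate = (f_hi(z_hi) - lam * f_lo(z_hi)).mean() + lam * mu_lo
```

Because the surrogate absorbs most of the fluctuation of `f_hi`, the residual being averaged over the few expensive samples has much smaller variance than `f_hi` itself (here the exact mean is 0.05).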
Related papers
- Multi-resolution Physics-Aware Recurrent Convolutional Neural Network for Complex Flows [2.7233737247962786]
MRPARCv2 is designed to model complex flows by embedding the structure of advection-diffusion-reaction equations. We evaluate the model on a challenging 2D turbulent radiative layer dataset from The Well multi-physics benchmark repository.
arXiv Detail & Related papers (2025-12-04T16:19:10Z)
- Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models. We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
arXiv Detail & Related papers (2025-12-01T05:14:25Z)
- MPQ-DMv2: Flexible Residual Mixed Precision Quantization for Low-Bit Diffusion Models with Temporal Distillation [74.34220141721231]
We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models.
arXiv Detail & Related papers (2025-07-06T08:16:50Z)
- Structure and asymptotic preserving deep neural surrogates for uncertainty quantification in multiscale kinetic equations [5.181697052513637]
The high dimensionality of kinetic equations with parameters poses computational challenges for uncertainty quantification (UQ). Traditional Monte Carlo (MC) sampling methods suffer from slow convergence and high variance, which become increasingly severe as the dimensionality of the space grows. We introduce surrogate models based on structure and asymptotic preserving neural networks (SAPNNs). SAPNNs are specifically designed to satisfy key physical properties, including positivity, conservation laws, entropy dissipation, and parameter limits.
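The structural constraints this line of work targets (positivity, conservation of velocity moments) can be illustrated with a generic output layer that enforces them by construction. This is a hedged sketch: the grid, weights, and function name below are hypothetical and not taken from any of the listed papers.

```python
import numpy as np

# Hypothetical velocity grid and quadrature weights for the sketch.
v = np.linspace(-6.0, 6.0, 129)
w = np.full_like(v, v[1] - v[0])

def positive_mass_preserving(raw, target_mass):
    """Map an unconstrained network output to a grid function that is
    pointwise positive and whose discrete velocity integral equals
    target_mass (positivity + mass conservation by construction)."""
    g = np.log1p(np.exp(raw))          # softplus => strict positivity
    mass = np.sum(w * g)               # zeroth velocity moment on the grid
    return g * (target_mass / mass)    # rescaling => exact mass conservation

raw = np.sin(v) - 0.2 * v**2           # stand-in for a raw network output
f = positive_mass_preserving(raw, target_mass=1.0)
```

Baking the constraints into the architecture, rather than penalizing violations in the loss, guarantees they hold exactly at every evaluation.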
arXiv Detail & Related papers (2025-06-12T12:20:53Z)
- KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches [43.872190335490515]
This paper introduces KITINet, a novel architecture that reinterprets feature propagation through the lens of non-equilibrium particle dynamics. At its core, we propose a residual module that models feature updates as the evolution of a particle system. This formulation mimics particle collisions and energy exchange, enabling adaptive feature refinement via physics-informed interactions.
arXiv Detail & Related papers (2025-05-23T13:58:29Z)
- Uncertainty Quantification for Multi-fidelity Simulations [0.0]
The work focuses on gathering high-fidelity and low-fidelity numerical simulation data using Nektar++ and XFOIL, respectively. The utilization of the higher distribution in calculating the coefficients of lift and drag has demonstrated superior accuracy and precision. To minimize the reliance on high-fidelity numerical simulations in uncertainty quantification, a multi-fidelity strategy has been adopted.
arXiv Detail & Related papers (2025-03-11T13:11:18Z)
- Ensemble models outperform single model uncertainties and predictions for operator-learning of hypersonic flows [43.148818844265236]
Training scientific machine learning (SciML) models on limited high-fidelity data offers one approach to rapidly predict behaviors for situations that have not been seen before.
High-fidelity data is itself in limited quantity to validate all outputs of the SciML model in unexplored input space.
We extend a DeepONet using three different uncertainty mechanisms: mean-variance estimation, evidential uncertainty, and ensembling.
arXiv Detail & Related papers (2023-10-31T18:07:29Z)
- Multi-fidelity reduced-order surrogate modeling [5.346062841242067]
We present a new data-driven strategy that combines dimensionality reduction with multi-fidelity neural network surrogates.
We show that the onset of instabilities and transients are well captured by this surrogate technique.
arXiv Detail & Related papers (2023-09-01T08:16:53Z)
- Auto-weighted Bayesian Physics-Informed Neural Networks and robust estimations for multitask inverse problems in pore-scale imaging of dissolution [0.0]
We present a novel data assimilation strategy in pore-scale imaging.
We demonstrate that this makes it possible to robustly address reactive inverse problems incorporating Uncertainty Quantification.
arXiv Detail & Related papers (2023-08-24T15:39:01Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
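The probabilistic representation mentioned above can be sketched with the classic Feynman-Kac formula for the 1D heat equation u_t = u_xx, whose solution is u(x, t) = E[u0(x + sqrt(2t) Z)] with Z standard normal. This is a generic textbook example, not the solver proposed in that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_mc(u0, x, t, n_samples=200_000):
    """Monte Carlo estimate of u(x, t) solving u_t = u_xx via the
    Feynman-Kac representation u(x, t) = E[u0(x + sqrt(2 t) Z)]."""
    z = rng.normal(size=n_samples)
    return u0(x + np.sqrt(2.0 * t) * z).mean()

# With u0(x) = x**2 the exact solution is u(x, t) = x**2 + 2*t,
# so at x = 1, t = 0.5 the estimate should be close to 2.
est = heat_mc(lambda x: x**2, x=1.0, t=0.5)
```

Viewing the macroscopic solution as an ensemble average over random particles is exactly what makes such solvers mesh-free and easy to evaluate pointwise.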
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.