Chaos into Order: Neural Framework for Expected Value Estimation of Stochastic Partial Differential Equations
- URL: http://arxiv.org/abs/2502.03670v1
- Date: Wed, 05 Feb 2025 23:27:28 GMT
- Title: Chaos into Order: Neural Framework for Expected Value Estimation of Stochastic Partial Differential Equations
- Authors: Ísak Pétursson, María Óskarsdóttir
- Abstract summary: We introduce a novel neural framework for SPDE estimation that eliminates the need for discretization while explicitly modeling uncertainty.
This is the first neural framework capable of directly estimating the expected values of SPDEs in an entirely non-discretized manner, offering a step forward in scientific computing.
Our findings highlight the immense potential of neural-based SPDE solvers, particularly for high-dimensional problems where conventional techniques falter.
- Score: 0.9944647907864256
- Abstract: Stochastic Partial Differential Equations (SPDEs) are fundamental to modeling complex systems in physics, finance, and engineering, yet their numerical estimation remains a formidable challenge. Traditional methods rely on discretization, introducing computational inefficiencies and limiting applicability in high-dimensional settings. In this work, we introduce a novel neural framework for SPDE estimation that eliminates the need for discretization, enabling direct estimation of expected values across arbitrary spatio-temporal points. We develop and compare two distinct neural architectures: Loss Enforced Conditions (LEC), which integrates physical constraints into the loss function, and Model Enforced Conditions (MEC), which embeds these constraints directly into the network structure. Through extensive experiments on the stochastic heat equation, Burgers' equation, and Kardar-Parisi-Zhang (KPZ) equation, we reveal a trade-off: while LEC achieves superior residual minimization and generalization, MEC enforces initial conditions with absolute precision and achieves exceptionally high accuracy in boundary condition enforcement. Our findings highlight the immense potential of neural-based SPDE solvers, particularly for high-dimensional problems where conventional techniques falter. By circumventing discretization and explicitly modeling uncertainty, our approach opens new avenues for solving SPDEs in fields ranging from quantitative finance to turbulence modeling. To the best of our knowledge, this is the first neural framework capable of directly estimating the expected values of SPDEs in an entirely non-discretized manner, offering a step forward in scientific computing.
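To make the LEC/MEC distinction concrete, here is a minimal PyTorch sketch of the two ways to enforce an initial condition u(0, x) = u0(x). The network shape, the example initial condition, and the pde_residual callback are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def u0(x):
    # Illustrative initial condition u(0, x) = sin(pi * x).
    return torch.sin(torch.pi * x)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def lec_loss(t, x, pde_residual):
    # LEC: the initial condition enters the loss as a soft penalty,
    # added to a (hypothetical) PDE residual term.
    u_ic = net(torch.stack([torch.zeros_like(t), x], dim=-1)).squeeze(-1)
    return pde_residual(net, t, x) + ((u_ic - u0(x)) ** 2).mean()

def mec_forward(t, x):
    # MEC: the ansatz u(t, x) = u0(x) + t * N(t, x) satisfies the
    # initial condition exactly for any network N, so no penalty is needed.
    return u0(x) + t * net(torch.stack([t, x], dim=-1)).squeeze(-1)
```

The reported trade-off follows naturally from this structure: LEC leaves the network free to trade initial-condition error against residual error, while MEC's hard constraint removes that error entirely but restricts the function class.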
Related papers
- Probabilistic neural operators for functional uncertainty quantification [14.08907045605149]
We introduce the probabilistic neural operator (PNO), a framework for learning probability distributions over the output function space of neural operators.
PNO extends neural operators with generative modeling based on strictly proper scoring rules, integrating uncertainty information directly into the training process.
arXiv Detail & Related papers (2025-02-18T14:42:11Z)
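As a rough illustration of a strictly proper scoring rule of the kind PNO trains against (not the paper's actual loss), the energy score can be estimated from samples of the predicted distribution; the function name and sampling interface below are assumptions.

```python
import torch

def energy_score(samples, y, beta=1.0):
    # Monte Carlo estimate of the energy score, strictly proper for
    # 0 < beta < 2:  E||X - y||^beta - 0.5 * E||X - X'||^beta.
    # samples: (m, d) draws from the model; y: (d,) observed function values.
    m = samples.shape[0]
    term1 = (samples - y).norm(dim=-1).pow(beta).mean()
    pdist = (samples.unsqueeze(0) - samples.unsqueeze(1)).norm(dim=-1).pow(beta)
    term2 = pdist.sum() / (m * (m - 1))  # mean over off-diagonal pairs
    return term1 - 0.5 * term2
```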
- Neural variational Data Assimilation with Uncertainty Quantification using SPDE priors [28.804041716140194]
Recent advances in the deep learning community make it possible to address the problem with a neural architecture embedded in a variational data assimilation framework.
In this work we use the theory of Stochastic Partial Differential Equations (SPDEs) and Gaussian Processes (GPs) to estimate both the space and time covariance of the state.
arXiv Detail & Related papers (2024-02-02T19:18:12Z)
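The SPDE-GP link invoked here is the classical result that Matérn Gaussian fields solve a fractional SPDE (Lindgren et al., 2011). A minimal sketch of the implied covariance, assuming 1-D inputs and a Matérn-3/2 kernel (illustrative, not the paper's code):

```python
import numpy as np

def matern32(x1, x2, lengthscale=1.0, variance=1.0):
    # Matern-3/2 covariance between two sets of 1-D locations; Matern GPs
    # are the stationary solutions of a fractional SPDE, which is the
    # link exploited for joint space-time covariance estimation.
    d = np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)
```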
- Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
arXiv Detail & Related papers (2023-08-14T09:21:19Z)
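For background on quantization-aware training in general (not this paper's specific scheme), the standard trick is to simulate low-precision arithmetic in the forward pass while letting gradients pass through unchanged. A minimal straight-through sketch, assuming symmetric int8 quantization:

```python
import torch

class FakeQuant(torch.autograd.Function):
    # Simulated symmetric int8 quantization with a straight-through
    # estimator: round in the forward pass, identity gradient backward.
    @staticmethod
    def forward(ctx, x, scale):
        return torch.clamp(torch.round(x / scale), -128, 127) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None

# Usage inside a layer, e.g. on the weights:
#   w_q = FakeQuant.apply(w, w.detach().abs().max() / 127)
```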
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
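The Deep Equilibrium framework referenced above defines a layer's output implicitly as the fixed point z* = f(z*, x). A minimal solver sketch, assuming plain fixed-point iteration rather than the accelerated solvers and implicit differentiation used in practice:

```python
import torch

def deq_fixed_point(f, x, z0, max_iter=50, tol=1e-5):
    # Naive fixed-point iteration for z* = f(z*, x); production DEQ layers
    # use Anderson or Broyden acceleration and differentiate implicitly
    # through the equilibrium instead of unrolling the loop.
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol * (torch.norm(z) + 1e-8):
            return z_next
        z = z_next
    return z
```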
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We therefore develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
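The probabilistic representation here is the classical Feynman-Kac correspondence. As an illustrative example (not the paper's solver): the heat equation u_t = 0.5 * u_xx with initial data u0 satisfies u(t, x) = E[u0(x + W_t)], where W_t is Brownian motion, so a plain Monte Carlo estimator evaluates the solution pointwise without any grid.

```python
import numpy as np

def heat_expectation(x, t, u0, n_paths=100_000):
    # Feynman-Kac for u_t = 0.5 * u_xx: sample Brownian endpoints W_t,
    # then average u0(x + W_t); one Gaussian draw per path suffices.
    endpoints = x + np.random.normal(0.0, np.sqrt(t), size=n_paths)
    return u0(endpoints).mean()

# Example: a Gaussian bump diffusing over unit time.
print(heat_expectation(0.0, 1.0, lambda s: np.exp(-s ** 2)))
```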
- Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z)
- Deep Learning Aided Laplace Based Bayesian Inference for Epidemiological Systems [2.596903831934905]
We propose a hybrid approach where Laplace-based Bayesian inference is combined with an ANN architecture for obtaining approximations to the ODE trajectories.
The effectiveness of our proposed methods is demonstrated using an epidemiological system with non-analytical solutions, the Susceptible-Infectious-Removed (SIR) model for infectious diseases.
arXiv Detail & Related papers (2022-10-17T09:02:41Z)
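For reference, the SIR test system is a three-state ODE. A minimal SciPy integration sketch, with illustrative parameter values rather than the paper's inferred ones:

```python
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma):
    # Standard SIR dynamics: beta is the transmission rate,
    # gamma the recovery rate; s + i + r is conserved.
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# 1% initially infectious; illustrative rates beta=0.3, gamma=0.1.
sol = solve_ivp(sir_rhs, (0.0, 160.0), [0.99, 0.01, 0.0],
                args=(0.3, 0.1), dense_output=True)
```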
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
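The backprop-free second derivatives rest on a second-order Stein identity: for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma * eps)] with eps ~ N(0, I), the Hessian equals E[f(x + sigma * eps) * (eps eps^T - I)] / sigma^2, so only forward evaluations of f are needed. A minimal NumPy sketch of this estimator (illustrative, not the authors' code):

```python
import numpy as np

def smoothed_hessian(f, x, sigma=0.1, n_samples=4096):
    # Estimate the Hessian of the Gaussian-smoothed f at x using only
    # forward evaluations, via the second-order Stein identity.
    d = x.shape[0]
    eps = np.random.normal(size=(n_samples, d))
    fvals = np.array([f(x + sigma * e) for e in eps])
    outer = np.einsum('ni,nj->nij', eps, eps) - np.eye(d)
    return (fvals[:, None, None] * outer).mean(axis=0) / sigma ** 2
```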
- Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models [38.17499046781131]
We propose a novel approach towards estimating uncertain neural ODEs, avoiding the numerical integration bottleneck.
Our algorithm - distributional gradient matching (DGM) - jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss.
Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
arXiv Detail & Related papers (2021-06-22T08:40:51Z)
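DGM's matching loss compares two distributions over derivatives. If both marginals are taken to be Gaussian with diagonal covariance (an assumption for this sketch, not a claim about the paper's exact loss), the squared 2-Wasserstein distance has a simple closed form:

```python
import torch

def w2_squared_diag_gaussians(mu_a, std_a, mu_b, std_b):
    # Squared 2-Wasserstein distance between diagonal Gaussians:
    # ||mu_a - mu_b||^2 + ||std_a - std_b||^2, summed over dimensions.
    return ((mu_a - mu_b) ** 2).sum() + ((std_a - std_b) ** 2).sum()
```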
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.