A Discrete Neural Operator with Adaptive Sampling for Surrogate Modeling of Parametric Transient Darcy Flows in Porous Media
- URL: http://arxiv.org/abs/2512.03113v1
- Date: Tue, 02 Dec 2025 09:32:56 GMT
- Title: A Discrete Neural Operator with Adaptive Sampling for Surrogate Modeling of Parametric Transient Darcy Flows in Porous Media
- Authors: Zhenglong Chen, Zhao Zhang, Xia Yan, Jiayu Zhai, Piyang Liu, Kai Zhang
- Abstract summary: This study proposes a new discrete neural operator for surrogate modeling of transient flow fields in heterogeneous porous media. The new method integrates temporal encoding, operator learning and UNet to approximate the mapping between the vector spaces of random parameters and flow fields. Results reveal a consistent enhancement in prediction accuracy given a limited training set.
- Score: 9.213861489570586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study proposes a new discrete neural operator for surrogate modeling of transient Darcy flow fields in heterogeneous porous media with random parameters. The new method integrates temporal encoding, operator learning and UNet to approximate the mapping between the vector spaces of random parameters and spatiotemporal flow fields. The new discrete neural operator can achieve higher prediction accuracy than the state-of-the-art attention-residual-UNet architecture. Derived from the finite volume method, transmissibility matrices rather than permeability are adopted as the surrogate inputs to further enhance prediction accuracy. To increase sampling efficiency, a generative latent-space adaptive sampling method is developed that employs a Gaussian mixture model for density estimation of the generalization error. Validation is conducted on test cases of 2D/3D single- and two-phase Darcy flow field prediction. Results reveal a consistent enhancement in prediction accuracy given a limited training set.
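The abstract states that the surrogate inputs are transmissibility matrices derived from the finite volume method rather than raw permeability, but the exact construction is not given here. Below is a minimal sketch assuming the standard two-point flux approximation (TPFA) on a uniform 2D grid with isotropic cell permeabilities; the function name and grid conventions are illustrative, not the authors'.

```python
import numpy as np

def tpfa_transmissibility(perm, dx=1.0, dy=1.0):
    """Face transmissibilities on a uniform 2D grid via the two-point
    flux approximation: harmonic average of the two adjacent cell
    permeabilities, scaled by face area over centre-to-centre distance
    (unit thickness assumed)."""
    # Vertical faces (flow in x): harmonic mean of left/right neighbours.
    kx = 2.0 * perm[:, :-1] * perm[:, 1:] / (perm[:, :-1] + perm[:, 1:])
    tx = kx * dy / dx                      # shape (ny, nx-1)
    # Horizontal faces (flow in y): harmonic mean of bottom/top neighbours.
    ky = 2.0 * perm[:-1, :] * perm[1:, :] / (perm[:-1, :] + perm[1:, :])
    ty = ky * dx / dy                      # shape (ny-1, nx)
    return tx, ty

# Example with a log-normal permeability field, typical for Darcy problems.
rng = np.random.default_rng(0)
perm = np.exp(rng.normal(size=(64, 64)))
tx, ty = tpfa_transmissibility(perm)
```

The harmonic average is the standard TPFA choice because it preserves flux continuity across a face separating cells of contrasting permeability.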
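The generative latent-space adaptive sampling is likewise only summarized. One plausible realization, sketched below under stated assumptions, fits a Gaussian mixture to the latent codes of the highest-error training samples and draws new candidates from it; the top-fraction heuristic, the component count, and all names are assumptions, and the paper's actual density estimation of the generalization error may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_sample(latents, errors, n_new, top_frac=0.2, n_components=4, seed=0):
    """Hedged sketch: concentrate new samples where the surrogate's
    estimated generalization error is high.

    latents : (n, d) latent codes of candidate parameter fields
    errors  : (n,) error estimates for the corresponding predictions
    """
    n_worst = max(n_components, int(top_frac * len(errors)))
    worst = latents[np.argsort(errors)[-n_worst:]]   # highest-error codes
    gmm = GaussianMixture(n_components=n_components,
                          random_state=seed).fit(worst)
    new_latents, _ = gmm.sample(n_new)
    return new_latents  # decode with the generative model to obtain new fields
```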
Related papers
- Low-Dimensional Adaptation of Rectified Flow: A New Perspective through the Lens of Diffusion and Stochastic Localization [59.04314685837778]
Rectified flow (RF) has gained considerable popularity due to its generation efficiency and state-of-the-art performance. In this paper, we investigate the degree to which RF automatically adapts to the intrinsic low dimensionality of the support of the target distribution to accelerate sampling. We show that, using a carefully designed choice of the time-discretization scheme and with sufficiently accurate drift estimates, the RF sampler enjoys a complexity of order $O(k/\varepsilon)$.
arXiv Detail & Related papers (2026-01-21T22:09:27Z) - Adaptive Sampling for Hydrodynamic Stability [0.0]
The study extends the machine-learning approach of Silvester (Machine Learning for Hydrodynamic Stability, arXiv:2407.09572). The proposed methodology introduces adaptivity through a flow-based deep generative model that automatically refines the sampling of the parameter space. KRnet is trained to approximate a probability density function that concentrates sampling in regions of high entropy.
arXiv Detail & Related papers (2025-12-15T17:00:09Z) - Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity of flow-matching-based generative models. We decompose the velocity-field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field. (A minimal sketch of the flow-matching objective appears after this list.)
arXiv Detail & Related papers (2025-12-01T05:14:25Z) - Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with quantile optimization and applies it to conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z) - Surrogate Modelling of Proton Dose with Monte Carlo Dropout Uncertainty Quantification [0.0]
We develop a neural surrogate that integrates Monte Carlo dropout to provide fast, differentiable dose predictions. The approach achieves significant speedups over MC while retaining uncertainty information. It is suitable for integration into robust planning, adaptive replanning and uncertainty-aware optimisation in proton therapy. (A generic Monte Carlo dropout sketch appears after this list.)
arXiv Detail & Related papers (2025-09-16T19:54:49Z) - Adaptive Sampling to Reduce Epistemic Uncertainty Using Prediction Interval-Generation Neural Networks [0.0]
This paper presents an adaptive sampling approach designed to reduce epistemic uncertainty in predictive models. Our primary contribution is the development of a metric that estimates potential epistemic uncertainty. A batch sampling strategy based on Gaussian processes (GPs) is also proposed (a simplified GP-based selection sketch appears after this list). We test our approach on three unidimensional synthetic problems and a multi-dimensional dataset based on an agricultural field for selecting experimental fertilizer rates.
arXiv Detail & Related papers (2024-12-13T21:21:47Z) - Characteristic Learning for Provable One Step Generation [12.620728925515012]
We propose a one-step generative model that combines the efficiency of sampling in Generative Adversarial Networks (GANs) with the stable performance of flow-based models. Our model is driven by characteristics, along which the probability-density transport can be described by ordinary differential equations (ODEs). A deep neural network is then trained to fit these characteristics, creating a one-step map that pushes a simple Gaussian distribution to the target distribution.
arXiv Detail & Related papers (2024-05-09T02:41:42Z) - Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
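For the flow-matching sample-complexity entry above, a minimal sketch of the objective being analysed may help orient the reader; `vnet` is a hypothetical velocity network, the tensors are assumed flat `(batch, dim)`, and the straight-line interpolation path is one common choice rather than necessarily that paper's exact setup.

```python
import torch

def flow_matching_loss(vnet, x0, x1):
    """One conditional flow-matching step: x0 ~ base Gaussian, x1 ~ data.
    Along x_t = (1 - t) x0 + t x1 the target velocity is x1 - x0, and the
    network v(x_t, t) is regressed onto it."""
    t = torch.rand(x0.shape[0], 1)          # per-sample time in [0, 1]
    xt = (1 - t) * x0 + t * x1
    return ((vnet(xt, t) - (x1 - x0)) ** 2).mean()
```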
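For the proton-dose surrogate entry, the Monte Carlo dropout recipe itself is standard: leave dropout active at test time and aggregate repeated stochastic forward passes. This generic PyTorch sketch shows the recipe, not that paper's specific architecture.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Mean prediction and per-output standard deviation from repeated
    stochastic forward passes (assumes the model uses nn.Dropout and has
    no layers, such as batch norm, whose train-mode behaviour would
    corrupt the estimate)."""
    model.train()                            # keeps dropout stochastic
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```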
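For the epistemic-uncertainty adaptive-sampling entry, that paper's contribution is its own interval-based metric; as a simplified stand-in, the sketch below selects a batch of candidates by largest Gaussian-process predictive standard deviation using scikit-learn.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_batch_select(x_train, y_train, candidates, batch_size=5):
    """Fit a GP to the labelled data and return the candidate points
    with the highest predictive standard deviation, a simple proxy for
    epistemic uncertainty."""
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(x_train, y_train)
    _, std = gp.predict(candidates, return_std=True)
    return candidates[np.argsort(std)[-batch_size:]]
```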