A reduced-order derivative-informed neural operator for subsurface fluid-flow
- URL: http://arxiv.org/abs/2509.13620v1
- Date: Wed, 17 Sep 2025 01:30:44 GMT
- Title: A reduced-order derivative-informed neural operator for subsurface fluid-flow
- Authors: Jeongjin Park, Grant Bruer, Huseyin Tuna Erdinc, Abhinav Prakash Gahlot, Felix J. Herrmann
- Abstract summary: We propose a reduced-order, derivative-informed training framework for neural operators. DeFINO captures sensitivity information directly informed by observational data. We demonstrate improvements in gradient accuracy while maintaining robust forward predictions of the underlying fluid dynamics.
- Score: 0.21585047554218337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators have emerged as cost-effective surrogates for expensive fluid-flow simulators, particularly in computationally intensive tasks such as permeability inversion from time-lapse seismic data, and uncertainty quantification. In these applications, the fidelity of the surrogate's gradients with respect to system parameters is crucial, as the accuracy of downstream tasks, such as optimization and Bayesian inference, relies directly on the quality of the derivative information. Recent advances in physics-informed methods have leveraged derivative information to improve surrogate accuracy. However, incorporating explicit Jacobians can become computationally prohibitive, as the complexity typically scales quadratically with the number of input parameters. To address this limitation, we propose DeFINO (Derivative-based Fisher-score Informed Neural Operator), a reduced-order, derivative-informed training framework. DeFINO integrates Fourier neural operators (FNOs) with a novel derivative-based training strategy guided by the Fisher Information Matrix (FIM). By projecting Jacobians onto dominant eigen-directions identified by the FIM, DeFINO captures critical sensitivity information directly informed by observational data, significantly reducing computational expense. We validate DeFINO through synthetic experiments in the context of subsurface multi-phase fluid-flow, demonstrating improvements in gradient accuracy while maintaining robust forward predictions of underlying fluid dynamics. These results highlight DeFINO's potential to offer practical, scalable solutions for inversion problems in complex real-world scenarios, all at substantially reduced computational cost.
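The core reduction described in the abstract, projecting Jacobians onto the dominant eigen-directions of the Fisher Information Matrix, can be illustrated with a minimal NumPy sketch. All names, dimensions, and the unit-covariance noise assumption below are illustrative, not taken from the paper's code: in practice the reference Jacobian actions would come from adjoint solves of the simulator and the surrogate Jacobian from automatic differentiation of the FNO.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 50, 5  # observation dim, parameter dim, retained eigen-directions

# Hypothetical reference (simulator) Jacobian and an imperfect surrogate
# Jacobian at one training sample.
J_ref = rng.standard_normal((m, n))
J_surr = J_ref + 0.1 * rng.standard_normal((m, n))

# Gauss-Newton Fisher information matrix for unit-covariance Gaussian noise.
fim = J_ref.T @ J_ref

# Dominant eigen-directions of the FIM (eigh returns ascending eigenvalues).
eigvals, eigvecs = np.linalg.eigh(fim)
V_r = eigvecs[:, -r:]  # top-r eigenvectors as columns, shape (n, r)

# Reduced-order derivative loss: match Jacobian actions on V_r only,
# i.e. r matrix-vector products per sample instead of all n Jacobian columns.
loss = np.mean((J_surr @ V_r - J_ref @ V_r) ** 2)
```

This is where the quadratic-to-linear cost reduction comes from: the training loss touches only r directions chosen by the data-informed FIM, rather than the full m-by-n Jacobian.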
Related papers
- Fluids You Can Trust: Property-Preserving Operator Learning for Incompressible Flows [3.8498574327875947]
We present a novel property-preserving kernel-based operator learning method for incompressible flows governed by the incompressible Navier-Stokes equations.
arXiv Detail & Related papers (2026-02-17T10:20:46Z) - Flow matching Operators for Residual-Augmented Probabilistic Learning of Partial Differential Equations [0.5729426778193397]
We formulate flow matching in an infinite-dimensional function space to learn a probabilistic transport. We develop a conditional neural operator architecture based on feature-wise linear modulation for flow matching vector fields. We show that the proposed method can accurately learn solution operators across different resolutions and fidelities.
arXiv Detail & Related papers (2025-12-14T16:06:10Z) - Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems [51.54065545849027]
The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems. VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. We propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator.
arXiv Detail & Related papers (2025-07-09T18:55:48Z) - OmniFluids: Physics Pre-trained Modeling of Fluid Dynamics [25.066485418709114]
We propose OmniFluids, a pure physics pre-trained model that captures fundamental fluid dynamics laws and adapts efficiently to diverse downstream tasks. We develop a training framework combining physics-only pre-training, coarse-grid operator distillation, and few-shot fine-tuning. Tests show that OmniFluids outperforms state-of-the-art AI-driven methods in terms of flow field prediction and statistics.
arXiv Detail & Related papers (2025-06-12T16:23:02Z) - LaPON: A Lagrange's-mean-value-theorem-inspired operator network for solving PDEs and its application on NSE [8.014720523981385]
We propose LaPON, an operator network inspired by Lagrange's mean value theorem. It embeds prior knowledge directly into the neural network architecture instead of the loss function. LaPON provides a scalable and reliable solution for high-fidelity fluid dynamics simulation.
arXiv Detail & Related papers (2025-05-18T10:45:17Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on a client-server architecture. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - Using Parametric PINNs for Predicting Internal and External Turbulent Flows [6.387263468033964]
We build upon the previously proposed RANS-PINN framework, which only focused on predicting flow over a cylinder.
We investigate its accuracy in predicting relevant turbulent flow variables for both internal and external flows.
arXiv Detail & Related papers (2024-10-24T17:08:20Z) - Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning [3.7051887945349518]
We propose derivative-informed neural operators (DINOs).
DINOs approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest.
We show that the proposed DINO achieves significantly higher accuracy than neural operators trained without derivative information.
arXiv Detail & Related papers (2022-06-21T21:40:01Z) - Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - Physics-aware deep neural networks for surrogate modeling of turbulent natural convection [0.0]
We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how it comes into play as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half-billion DNS coordinates yields errors for all flow variables ranging between 0.3% and 4% in the relative L2 norm.
arXiv Detail & Related papers (2021-03-05T09:48:57Z)
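The probabilistic representation mentioned in the Monte Carlo Neural PDE Solver entry above, treating macroscopic solutions as ensembles of random particles, reduces for the heat equation to the classical Feynman-Kac average over Brownian endpoints. A minimal sketch follows; the function name, parameters, and the Gaussian test case are illustrative, not taken from that paper:

```python
import numpy as np

def heat_mc(x, t, g, diffusivity=1.0, n_paths=200_000, seed=0):
    """Feynman-Kac estimate of u(x, t) for u_t = D * u_xx, u(x, 0) = g(x):
    u(x, t) = E[g(x + sqrt(2 D t) Z)], with Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return g(x + np.sqrt(2.0 * diffusivity * t) * z).mean()

# Gaussian initial data admits a closed-form solution to check against:
# g(x) = exp(-x^2 / 2)  =>  u(x, t) = exp(-x^2 / (2 (1 + 2 D t))) / sqrt(1 + 2 D t)
g = lambda x: np.exp(-x ** 2 / 2.0)
u_mc = heat_mc(0.5, 0.3, g)                          # Monte Carlo estimate
u_exact = np.exp(-0.5 ** 2 / (2.0 * 1.6)) / np.sqrt(1.6)  # closed form, 1 + 2*1*0.3 = 1.6
```

With 200,000 sampled endpoints, the Monte Carlo estimate agrees with the closed-form value to roughly the third decimal place; a neural solver trained on such estimates needs no mesh or labeled solution data.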
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.