Physics-Informed Time-Integrated DeepONet: Temporal Tangent Space Operator Learning for High-Accuracy Inference
- URL: http://arxiv.org/abs/2508.05190v1
- Date: Thu, 07 Aug 2025 09:25:52 GMT
- Title: Physics-Informed Time-Integrated DeepONet: Temporal Tangent Space Operator Learning for High-Accuracy Inference
- Authors: Luis Mandl, Dibyajyoti Nayak, Tim Ricken, Somdatta Goswami,
- Abstract summary: We introduce a dual-output architecture trained via fully physics-informed or hybrid physics- and data-driven objectives. Instead of forecasting future states, the network learns the time-derivative operator from the current state, integrating it using classical time-stepping schemes. Applied to benchmark problems, PITI-DeepONet shows improved accuracy over extended time horizons when compared to traditional methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately modeling and inferring solutions to time-dependent partial differential equations (PDEs) over extended horizons remains a core challenge in scientific machine learning. Traditional full rollout (FR) methods, which predict entire trajectories in one pass, often fail to capture causal dependencies and generalize poorly outside the training time horizon. Autoregressive (AR) approaches, evolving the system step by step, suffer from error accumulation that limits long-term accuracy. These shortcomings undermine the reliability of both strategies over extended horizons. To address these issues, we introduce the Physics-Informed Time-Integrated Deep Operator Network (PITI-DeepONet), a dual-output architecture trained via fully physics-informed or hybrid physics- and data-driven objectives to ensure stable, accurate long-term evolution well beyond the training horizon. Instead of forecasting future states, the network learns the time-derivative operator from the current state, integrating it using classical time-stepping schemes to advance the solution in time. Additionally, the framework can leverage residual monitoring during inference to estimate prediction quality and detect when the system transitions outside the training domain. Applied to benchmark problems, PITI-DeepONet shows improved accuracy over extended inference time horizons when compared to traditional methods. Mean relative $\mathcal{L}_2$ errors are reduced by 84% (vs. FR) and 79% (vs. AR) for the one-dimensional heat equation; by 87% (vs. FR) and 98% (vs. AR) for the one-dimensional Burgers equation; and by 42% (vs. FR) and 89% (vs. AR) for the two-dimensional Allen-Cahn equation. By moving beyond classic FR and AR schemes, PITI-DeepONet paves the way for more reliable, long-term integration of complex, time-dependent PDEs.
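The core mechanism can be illustrated compactly. The following is a minimal sketch, not the authors' implementation: it replaces the branch-trunk DeepONet with a plain MLP on a fixed grid, uses the finite-difference right-hand side of the 1D heat equation as the physics-informed target for the time-derivative operator, then advances the state with classical RK4 while monitoring the PDE residual as an out-of-distribution indicator. All sizes and hyperparameters are illustrative.

```python
# Minimal sketch (not the paper's code): learn the time-derivative operator
# du/dt = N_theta(u) for the 1D heat equation u_t = alpha * u_xx, then advance
# the state with classical RK4 and monitor the PDE residual during inference.
# A plain MLP stands in for the branch/trunk DeepONet of the paper.
import torch, torch.nn as nn

alpha, nx = 0.1, 64
x = torch.linspace(0.0, 1.0, nx)
dx = float(x[1] - x[0])

def heat_rhs(u):                      # finite-difference u_xx (zero-Dirichlet ends)
    uxx = torch.zeros_like(u)
    uxx[..., 1:-1] = (u[..., 2:] - 2 * u[..., 1:-1] + u[..., :-2]) / dx**2
    return alpha * uxx

net = nn.Sequential(nn.Linear(nx, 128), nn.Tanh(), nn.Linear(128, nx))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Physics-informed training: match N_theta(u) to the PDE right-hand side
# on randomly sampled states (here: random sine profiles).
for step in range(2000):
    k = torch.randint(1, 4, (32, 1)).float()
    amp = torch.rand(32, 1)
    u = amp * torch.sin(k * torch.pi * x)            # batch of states
    loss = ((net(u) - heat_rhs(u)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: RK4 time stepping with residual monitoring.
def rk4_step(u, dt):
    k1 = net(u); k2 = net(u + 0.5 * dt * k1)
    k3 = net(u + 0.5 * dt * k2); k4 = net(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u = torch.sin(torch.pi * x).unsqueeze(0)
with torch.no_grad():
    for n in range(200):
        u = rk4_step(u, dt=1e-4)
        residual = (net(u) - heat_rhs(u)).norm() / u.norm()   # out-of-domain indicator
print(f"final relative residual: {residual.item():.3e}")
```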
Related papers
- A deep solver for backward stochastic Volterra integral equations [44.99833362998488]
We present the first deep-learning solver for backward stochastic Volterra integral equations (BSVIEs). The method trains a neural network to approximate the two solution fields in a single stage. These results open practical access to a family of high-dimensional, path-dependent problems in control and quantitative finance.
arXiv Detail & Related papers (2025-05-23T18:41:54Z) - TI-DeepONet: Learnable Time Integration for Stable Long-Term Extrapolation [0.0]
TI-DeepONet is a framework that integrates neural operators with adaptive numerical time-stepping techniques. This research establishes a physics-aware operator learning paradigm that bridges neural approximation with numerical analysis.
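A hedged reading of "learnable time integration": one option is to keep the Runge-Kutta structure but make the slope-combination weights trainable alongside the derivative network. The sketch below is an assumption-laden illustration of that idea, not the TI-DeepONet reference implementation; LearnableRKStep, its sizes, and the softmax parameterization are invented here.

```python
# Hedged sketch (not the TI-DeepONet code): treat the slope-combination weights
# of a Runge-Kutta step as trainable parameters alongside the derivative network.
import torch, torch.nn as nn

class LearnableRKStep(nn.Module):
    """RK4-style step u_{n+1} = u_n + dt * sum_i w_i k_i with trainable weights w_i."""
    def __init__(self, rhs_net):
        super().__init__()
        self.rhs = rhs_net                               # network predicting du/dt
        # initialized to the classical RK4 weights (1, 2, 2, 1)/6
        self.logits = nn.Parameter(torch.log(torch.tensor([1., 2., 2., 1.]) / 6))

    def forward(self, u, dt):
        k1 = self.rhs(u)
        k2 = self.rhs(u + 0.5 * dt * k1)
        k3 = self.rhs(u + 0.5 * dt * k2)
        k4 = self.rhs(u + dt * k3)
        w = torch.softmax(self.logits, dim=0)            # weights stay positive, sum to 1
        return u + dt * (w[0] * k1 + w[1] * k2 + w[2] * k3 + w[3] * k4)

# Usage: wrap any derivative-operator network and train both jointly on trajectories.
rhs = nn.Sequential(nn.Linear(64, 128), nn.Tanh(), nn.Linear(128, 64))
stepper = LearnableRKStep(rhs)
u_next = stepper(torch.randn(8, 64), dt=1e-3)            # one learned time step
```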
arXiv Detail & Related papers (2025-05-22T23:24:31Z) - Physics-informed Reduced Order Modeling of Time-dependent PDEs via Differentiable Solvers [0.0]
We propose Physics-informed ROM ($\Phi$-ROM) by incorporating differentiable PDE solvers into the training procedure. Specifically, the latent space dynamics and its dependence on PDE parameters are shaped directly by the governing physics encoded in the solver. Our model outperforms state-of-the-art data-driven ROMs and other physics-informed strategies by accurately generalizing to new dynamics arising from unseen parameters.
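As a rough illustration of training through a differentiable solver (an assumed formulation, not the $\Phi$-ROM code), the sketch below shapes a latent dynamics network so that evolving in latent space and decoding agrees with one differentiable finite-difference step of the heat equation applied to the decoded field; the decoder, latent_rhs, and all sizes are illustrative.

```python
# Hedged sketch: latent dynamics shaped by a differentiable solver step.
# "Solver" here is a differentiable forward-Euler heat-equation step on a periodic grid.
import torch, torch.nn as nn

nx, d_lat, alpha, dx, dt = 128, 8, 0.1, 1.0 / 128, 1e-5
decoder = nn.Sequential(nn.Linear(d_lat, 128), nn.Tanh(), nn.Linear(128, nx))
latent_rhs = nn.Sequential(nn.Linear(d_lat, 64), nn.Tanh(), nn.Linear(64, d_lat))

def solver_step(u):                                    # differentiable PDE solver step
    lap = (torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)) / dx**2
    return u + dt * alpha * lap

opt = torch.optim.Adam(list(decoder.parameters()) + list(latent_rhs.parameters()), lr=1e-3)
for it in range(100):
    z = torch.randn(16, d_lat)                         # sampled latent states
    u = decoder(z)
    u_solver = solver_step(u)                          # physics target via the solver
    u_latent = decoder(z + dt * latent_rhs(z))         # evolve in latent space, then decode
    loss = ((u_latent - u_solver) ** 2).mean()         # gradients flow through the solver
    opt.zero_grad(); loss.backward(); opt.step()
```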
arXiv Detail & Related papers (2025-05-20T16:47:04Z) - Enabling Automatic Differentiation with Mollified Graph Neural Operators [75.3183193262225]
We propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries. For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
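The enabling idea, exact derivatives from automatic differentiation evaluated at arbitrary points, can be shown in a few lines. This is a generic sketch rather than the mGNO implementation: a small network maps coordinates to a scalar field and torch.autograd.grad supplies the exact second derivatives for a Poisson residual on a random point cloud.

```python
# Generic sketch: exact coordinate derivatives via autograd, so a PDE residual
# loss can be evaluated on unstructured points without finite-difference stencils.
# Shown for a Poisson residual u_xx + u_yy = f; sizes are illustrative.
import torch, torch.nn as nn

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def poisson_residual(xy, f):
    xy = xy.requires_grad_(True)                       # scattered points, shape (N, 2)
    u = model(xy)
    grad_u = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]        # (N, 2): u_x, u_y
    u_xx = torch.autograd.grad(grad_u[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grad_u[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return u_xx + u_yy - f                             # exact derivatives, no grid required

pts = torch.rand(1024, 2)                              # any point cloud / geometry works
f = torch.ones(1024)
physics_loss = poisson_residual(pts, f).pow(2).mean()  # differentiable w.r.t. model parameters
physics_loss.backward()
```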
arXiv Detail & Related papers (2025-04-11T06:16:30Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite differences, with a small number of parameters to optimize. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDEs to guide the prediction.
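A minimal sketch of the two ingredients named above, assuming a 1D heat equation and illustrative sizes (this is not the MultiPDENet code): a convolution whose kernel is initialized to the central finite-difference Laplacian stencil, wrapped in a 4th-order Runge-Kutta step playing the role of the fine-time-scale Physics Block.

```python
# Hedged sketch: a learnable finite-difference-structured convolution inside an
# RK4 step for u_t = alpha * u_xx. Sizes and constants are illustrative.
import torch, torch.nn as nn

alpha, dx, dt = 0.1, 1.0 / 64, 1e-4

# Learnable 3-point stencil, initialized to [1, -2, 1] / dx^2 (few parameters to optimize).
stencil = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    stencil.weight.copy_(torch.tensor([[[1.0, -2.0, 1.0]]]) / dx**2)

def rhs(u):                       # u: (batch, 1, nx)
    return alpha * stencil(u)

def rk4_physics_block(u, dt):     # fine-time-scale integrator embedding the PDE structure
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u0 = torch.sin(torch.pi * torch.linspace(0, 1, 64)).view(1, 1, -1)
u1 = rk4_physics_block(u0, dt)    # one fine step; a learned coarse corrector could wrap this
```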
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Physics-Informed Latent Neural Operator for Real-time Predictions of Complex Physical Systems [0.0]
We propose PI-Latent-NO, a physics-informed latent neural operator framework that integrates governing physics directly into the learning process. Our architecture features two coupled DeepONets trained end-to-end: a Latent-DeepONet that learns a low-dimensional representation of the solution, and a Reconstruction-DeepONet that maps this latent representation back to the physical space.
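A simplified sketch of that two-operator layout, with MLP branch/trunk networks and invented sizes (not the PI-Latent-NO implementation): a Latent-DeepONet maps the sensed input function and a time query to a small latent vector, and a Reconstruction-DeepONet maps the latent vector and a spatial query back to the physical solution, so the pair can be trained end-to-end with data or physics losses on the reconstruction.

```python
# Hedged sketch of two coupled DeepONet-style modules (assumed sizes).
import torch, torch.nn as nn

m, d_lat, p = 100, 16, 32            # sensor count, latent size, basis size (assumed)

class LatentDeepONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, d_lat * p))
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, p))
    def forward(self, u0, t):                           # u0: (B, m), t: (B, 1)
        b = self.branch(u0).view(-1, d_lat, p)
        return torch.einsum("bdp,bp->bd", b, self.trunk(t))   # latent z: (B, d_lat)

class ReconstructionDeepONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(d_lat, 128), nn.Tanh(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, p))
    def forward(self, z, x):                            # z: (B, d_lat), x: (n, 1)
        return self.branch(z) @ self.trunk(x).T         # u(x): (B, n)

latent_no, recon_no = LatentDeepONet(), ReconstructionDeepONet()
u0 = torch.randn(4, m); t = torch.rand(4, 1); x = torch.linspace(0, 1, 50).view(-1, 1)
u_pred = recon_no(latent_no(u0, t), x)                  # (4, 50) solution snapshot
```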
arXiv Detail & Related papers (2025-01-14T20:38:30Z) - Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
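As a generic stand-in (not this paper's latent-evolution method), one simple way to track uncertainty along a long autoregressive rollout is to evolve an ensemble of surrogates and record the per-step spread, which grows where the members disagree.

```python
# Generic ensemble sketch for rollout uncertainty (not the paper's approach).
import torch, torch.nn as nn

def make_step_model(d=32):
    return nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, d))

ensemble = [make_step_model() for _ in range(8)]     # independently initialized members
z = torch.randn(1, 32)                               # initial (latent) state

means, stds = [], []
with torch.no_grad():
    states = [z.clone() for _ in ensemble]
    for step in range(100):                          # long autoregressive rollout
        states = [m(s) for m, s in zip(ensemble, states)]
        stacked = torch.stack(states)                # (members, 1, d)
        means.append(stacked.mean(0))
        stds.append(stacked.std(0))                  # per-step predictive spread

# stds[-1] grows where members disagree, flagging unreliable long-horizon predictions.
```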
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Temporal Subsampling Diminishes Small Spatial Scales in Recurrent Neural
Network Emulators of Geophysical Turbulence [0.0]
We investigate how an often overlooked processing step affects the quality of an emulator's predictions.
We implement ML architectures from a class of methods called reservoir computing: (1) a form of spatial Vector Autoregression (NVAR), and (2) an Echo State Network (ESN).
In all cases, subsampling the training data consistently leads to an increased bias at small scales that resembles numerical diffusion.
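For concreteness, here is a minimal NumPy echo state network with a ridge-regression readout (invented sizes, a synthetic trajectory, and not the paper's emulator); temporal subsampling of the training data, the processing step under study, would be a one-line change such as u_train[::stride].

```python
# Minimal echo state network sketch: fixed random reservoir, ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

def run_reservoir(u_seq):
    r = np.zeros(n_res)
    states = []
    for u in u_seq:
        r = np.tanh(W_in @ u + W @ r)                 # reservoir update
        states.append(r.copy())
    return np.array(states)

u_train = rng.normal(size=(2000, n_in))               # stand-in trajectory; subsample via u_train[::stride]
R = run_reservoir(u_train[:-1])
Y = u_train[1:]                                        # one-step-ahead targets
beta = 1e-6
W_out = Y.T @ R @ np.linalg.inv(R.T @ R + beta * np.eye(n_res))   # ridge readout
pred = R @ W_out.T                                     # one-step predictions on training inputs
```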
arXiv Detail & Related papers (2023-04-28T21:34:53Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the time point of supervision, and is able to interpolate the solutions to any intermediate time.
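For reference, the sketch below shows the standard building block of the Fourier Neural Operator architecture this method pairs with a hyper-network solver: a 1D spectral convolution that mixes channels on a truncated set of Fourier modes. The hyper-network itself is omitted; conceptually it would generate or modulate these spectral weights as a function of the supervision time. Sizes are illustrative and this is not the paper's code.

```python
# Standard 1D spectral convolution layer (FNO building block); illustrative sizes.
import torch, torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                              # x: (batch, channels, nx)
        x_ft = torch.fft.rfft(x)                       # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., : self.modes] = torch.einsum(      # mix channels on low modes only
            "bci,coi->boi", x_ft[..., : self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))   # back to physical space

layer = SpectralConv1d()
y = layer(torch.randn(4, 32, 128))                     # output shape (4, 32, 128)
```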
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Learning to Accelerate Partial Differential Equations via Latent Global
Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
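A hedged encode-evolve-decode sketch of the latent-evolution idea (plain MLPs and invented sizes rather than the paper's architecture): the state is compressed once, rolled out cheaply in the low-dimensional latent space, and decoded back to the full field only when needed.

```python
# Encode-evolve-decode sketch; the 1024 -> 8 compression is illustrative only.
import torch, torch.nn as nn

nx, d_lat = 1024, 8                                    # e.g. 128x fewer dimensions to update
encoder = nn.Sequential(nn.Linear(nx, 256), nn.ReLU(), nn.Linear(256, d_lat))
decoder = nn.Sequential(nn.Linear(d_lat, 256), nn.ReLU(), nn.Linear(256, nx))
latent_step = nn.Sequential(nn.Linear(d_lat, 64), nn.Tanh(), nn.Linear(64, d_lat))

u0 = torch.randn(1, nx)
z = encoder(u0)
with torch.no_grad():
    for _ in range(500):                               # long rollout touches only d_lat numbers
        z = latent_step(z)
u_final = decoder(z)                                   # decode back to the full field
```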
arXiv Detail & Related papers (2022-06-15T17:31:24Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)