Accelerating Data Generation for Nonlinear Temporal PDEs via Homologous Perturbation in Solution Space
- URL: http://arxiv.org/abs/2510.21592v2
- Date: Fri, 31 Oct 2025 07:51:12 GMT
- Title: Accelerating Data Generation for Nonlinear Temporal PDEs via Homologous Perturbation in Solution Space
- Authors: Lei Liu, Zhenxin Huang, Hong Wang, Huanshuo Dong, Haiyang Xin, Hongwei Zhao, Bin Li
- Abstract summary: We propose a novel data generation algorithm, called HOmologous Perturbation in Solution Space (HOPSS). HOPSS directly generates training datasets with fewer time steps rather than following the traditional approach of generating datasets with many time steps. For example, on the Navier-Stokes equation, it generates 10,000 samples in approximately 10% of traditional methods' time, with comparable model training performance.
- Score: 14.26905095515366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven deep learning methods like neural operators have advanced in solving nonlinear temporal partial differential equations (PDEs). However, these methods require large quantities of solution pairs: the solution functions and the right-hand sides (RHS) of the equations. These pairs are typically generated via traditional numerical methods, which need thousands of time-step iterations, far more than the dozens required for training, creating heavy computational and temporal overheads. To address these challenges, we propose a novel data generation algorithm, called HOmologous Perturbation in Solution Space (HOPSS), which directly generates training datasets with fewer time steps rather than following the traditional approach of generating datasets with many time steps. This algorithm simultaneously accelerates dataset generation and preserves the approximate precision required for model training. Specifically, we first obtain a set of base solution functions from a reliable solver, usually with thousands of time steps, and then align them in time steps with training datasets by downsampling. Subsequently, we propose a "homologous perturbation" approach: by combining two solution functions (one as the primary function, the other as a homologous perturbation term scaled by a small scalar) with random noise, we efficiently generate comparable-precision PDE data points. Finally, using these data points, we compute the variation in the original equation's RHS to form new solution pairs. Theoretical and experimental results show HOPSS lowers time complexity. For example, on the Navier-Stokes equation, it generates 10,000 samples in approximately 10% of traditional methods' time, with comparable model training performance.
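The homologous-perturbation step described in the abstract lends itself to a compact illustration. The sketch below is a loose, hypothetical reading, assuming a 1D heat equation u_t = nu * u_xx so the RHS can be recomputed with finite differences; the function names (rhs_heat, hopss_pair), the equation, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rhs_heat(u, dx, nu=0.01):
    """RHS of u_t = nu * u_xx via a periodic second-order finite-difference stencil."""
    return nu * (np.roll(u, -1, axis=-1) - 2.0 * u + np.roll(u, 1, axis=-1)) / dx**2

def hopss_pair(u_primary, u_homologous, dx, eps=1e-2, noise_std=1e-4, rng=None):
    """Form a new solution as primary + eps * homologous perturbation + noise,
    then recompute the equation's RHS so (solution, RHS) remains a consistent pair."""
    rng = np.random.default_rng() if rng is None else rng
    u_new = u_primary + eps * u_homologous + noise_std * rng.standard_normal(u_primary.shape)
    return u_new, rhs_heat(u_new, dx)

# Two base solutions, standing in for downsampled trajectories from a trusted solver.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
u_new, f_new = hopss_pair(np.sin(2 * np.pi * x), np.sin(4 * np.pi * x), dx=x[1] - x[0])
```

The point of the recomputation in the last step is that the perturbed function is no longer an exact solution of the original problem, but applying the operator to it yields a matching RHS, so the pair stays self-consistent for training.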
Related papers
- An adjoint method for training data-driven reduced-order models [0.22940141855172028]
We propose a training framework that couples a continuous-time form of operator inference with the adjoint-state method to obtain robust data-driven reduced-order models. We perform systematic comparisons against standard operator inference under two perturbation regimes, namely reduced temporal snapshot density and additive Gaussian noise.
arXiv Detail & Related papers (2026-01-12T14:30:50Z)
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite differences, with a small number of parameters to optimize. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
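As a point of reference for the Physics Block mentioned in the entry above, a classical 4th-order Runge-Kutta step for du/dt = f(u) looks like the sketch below; this is the textbook integrator, not MultiPDENet's actual code.

```python
def rk4_step(f, u, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(u)."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```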
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference [55.150117654242706]
We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU. As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
arXiv Detail & Related papers (2024-11-01T21:11:48Z)
- Fast training of accurate physics-informed neural networks without gradient descent [4.411766183442036]
We present Frozen-PINN, a novel PINN based on the principle of space-time separation. On eight PDE benchmarks, Frozen-PINNs achieve superior training efficiency and accuracy over state-of-the-art PINNs.
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Accelerating PDE Data Generation via Differential Operator Action in Solution Space [5.801585720878268]
We propose a novel PDE dataset generation algorithm, namely Differential Operator Action in Solution space (DiffOAS).
DiffOAS first obtains a few basic PDE solutions and then combines them to construct new solutions.
It applies differential operators to these solutions, a process called 'operator action', to efficiently generate precise PDE data points.
Experimental results show that DiffOAS accelerates the generation of large-scale datasets with 10,000 instances by 300 times.
arXiv Detail & Related papers (2024-02-04T17:28:27Z)
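The 'operator action' idea in the entry above can be sketched in a few lines: draw random linear combinations of basis solutions and apply the differential operator directly to obtain each matching RHS, with no time-stepping solver in the loop. The operator (a periodic 1D Laplacian) and all names are assumptions for illustration, not the DiffOAS implementation.

```python
import numpy as np

def laplacian(u, dx):
    """Periodic second-derivative stencil, standing in for the PDE's differential operator."""
    return (np.roll(u, -1, axis=-1) - 2.0 * u + np.roll(u, 1, axis=-1)) / dx**2

def operator_action_dataset(basis, dx, n_samples, rng=None):
    """Random linear combinations of basis solutions; applying the operator to each
    combination yields its RHS, so each (solution, RHS) pair costs one operator call."""
    rng = np.random.default_rng() if rng is None else rng
    weights = rng.standard_normal((n_samples, len(basis)))
    solutions = weights @ np.asarray(basis)   # shape (n_samples, nx)
    return solutions, laplacian(solutions, dx)
```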
- Generating synthetic data for neural operators [0.0]
We introduce a "backward" data generation method that avoids solving the PDE numerically. This produces training pairs $(f_j, u_j)$ by computing derivatives rather than solving a PDE numerically for each data point. Experiments indicate that models trained on this synthetic data generalize well when tested on data produced by standard solvers.
arXiv Detail & Related papers (2024-01-04T18:31:21Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
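Since the entry above builds on a Fourier Neural Operator, a minimal spectral-convolution layer is sketched below for orientation: transform to Fourier space, multiply a truncated set of modes by learned weights, transform back. Shapes, names, and the random weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Toy FNO-style spectral layer: FFT, weight the lowest n_modes, inverse FFT.
    u: (batch, nx) real signal; weights: (n_modes,) complex multipliers."""
    u_hat = np.fft.rfft(u, axis=-1)
    out_hat = np.zeros_like(u_hat)
    out_hat[:, :n_modes] = u_hat[:, :n_modes] * weights  # learned in a real model
    return np.fft.irfft(out_hat, n=u.shape[-1], axis=-1)

u = np.random.randn(4, 128)
w = np.random.randn(16) + 1j * np.random.randn(16)
v = spectral_conv_1d(u, w, n_modes=16)
```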
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations: physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
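For readers unfamiliar with the PINNs mentioned in the entry above, the core idea is to penalize the PDE residual at collocation points via automatic differentiation. The sketch below, assuming the 1D heat equation u_t = nu * u_xx and PyTorch autograd, is a generic illustration of that idea, not GatedPINN itself.

```python
import torch

def heat_residual(model, x, t, nu=0.01):
    """PDE residual r = u_t - nu * u_xx at collocation points, via autograd."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - nu * u_xx

model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x, t = torch.rand(256), torch.rand(256)
loss = heat_residual(model, x, t).pow(2).mean()  # drive the residual toward zero
```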
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale, single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Deep-learning of Parametric Partial Differential Equations from Sparse and Noisy Data [2.4431531175170362]
In this work, a new framework that combines neural networks, genetic algorithms, and adaptive methods is put forward to address all of these challenges simultaneously.
A trained neural network is utilized to calculate derivatives and generate a large amount of meta-data, which solves the problem of sparse, noisy data.
Next, a genetic algorithm is utilized to discover the form of PDEs and the corresponding coefficients with an incomplete candidate library.
A two-step adaptive method is introduced to discover parametric PDEs with spatially- or temporally-varying coefficients.
arXiv Detail & Related papers (2020-05-16T09:09:57Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It can be used for the discovery of governing analytical equations as well as partial differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
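The sparse-regression half of the two discovery entries above can be illustrated with a SINDy-style sketch: build a library of candidate terms, then iteratively threshold a least-squares fit so only a few terms survive. The library, threshold, and coefficients here are illustrative assumptions, not either paper's actual method.

```python
import numpy as np

def sparse_regression(theta, u_t, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares: fit u_t = theta @ xi, then zero out
    small coefficients and refit on the surviving candidate terms."""
    xi = np.linalg.lstsq(theta, u_t, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], u_t, rcond=None)[0]
    return xi

# Candidate library [u, u^2, u_x, u*u_x, u_xx] at sample points (random stand-ins).
theta = np.random.randn(1000, 5)
true_xi = np.array([0.0, 0.0, 0.0, -1.0, 0.1])  # e.g. Burgers: u_t = -u*u_x + 0.1*u_xx
u_t = theta @ true_xi + 1e-3 * np.random.randn(1000)
print(sparse_regression(theta, u_t))
```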