DeepONet Augmented by Randomized Neural Networks for Efficient Operator Learning in PDEs
- URL: http://arxiv.org/abs/2503.00317v1
- Date: Sat, 01 Mar 2025 03:05:29 GMT
- Title: DeepONet Augmented by Randomized Neural Networks for Efficient Operator Learning in PDEs
- Authors: Zhaoxi Jiang, Fei Wang
- Abstract summary: We propose RaNN-DeepONets, a hybrid architecture designed to balance accuracy and efficiency. RaNN-DeepONets achieves comparable accuracy while reducing computational costs by orders of magnitude. These results highlight the potential of RaNN-DeepONets as an efficient alternative for operator learning in PDE-based systems.
- Score: 5.84093922354671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep operator networks (DeepONets) represent a powerful class of data-driven methods for operator learning, demonstrating strong approximation capabilities for a wide range of linear and nonlinear operators. They have shown promising performance in learning operators that govern partial differential equations (PDEs), including diffusion-reaction systems and Burgers' equations. However, the accuracy of DeepONets is often constrained by computational limitations and optimization challenges inherent in training deep neural networks. Furthermore, the computational cost associated with training these networks is typically very high. To address these challenges, we leverage randomized neural networks (RaNNs), in which the parameters of the hidden layers remain fixed following random initialization. RaNNs compute the output layer parameters using the least-squares method, significantly reducing training time and mitigating optimization errors. In this work, we integrate DeepONets with RaNNs to propose RaNN-DeepONets, a hybrid architecture designed to balance accuracy and efficiency. Furthermore, to mitigate the need for extensive data preparation, we introduce the concept of physics-informed RaNN-DeepONets. Instead of relying on data generated through other time-consuming numerical methods, we incorporate PDE information directly into the training process. We evaluate the proposed model on three benchmark PDE problems: diffusion-reaction dynamics, Burgers' equation, and the Darcy flow problem. Through these tests, we assess its ability to learn nonlinear operators with varying input types. When compared to the standard DeepONet framework, RaNN-DeepONets achieves comparable accuracy while reducing computational costs by orders of magnitude. These results highlight the potential of RaNN-DeepONets as an efficient alternative for operator learning in PDE-based systems.
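To make the core idea concrete, here is a minimal NumPy sketch of a randomized-network DeepONet in the spirit of the abstract: the branch and trunk hidden layers stay fixed after random initialization, their outputs are combined elementwise as in a standard DeepONet, and only the linear output layer is fit by least squares. All names, sizes, and the placeholder data below are illustrative assumptions, not the paper's architecture; the physics-informed variant (which replaces solver-generated targets with PDE constraints) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(x, W, b):
    # Single hidden layer whose weights remain fixed after random init.
    return np.tanh(x @ W + b)

# Illustrative sizes (not from the paper).
m, p, n = 100, 200, 1000   # sensor points, random features, training triples

# Fixed random parameters for the branch net (input function u sampled
# at m sensors) and the trunk net (query coordinate y).
W_b, b_b = rng.normal(size=(m, p)), rng.normal(size=p)
W_t, b_t = rng.normal(size=(1, p)), rng.normal(size=p)

# Placeholder training data; in practice s = G(u)(y) comes from a solver.
U = rng.normal(size=(n, m))       # input functions at sensor points
Y = rng.uniform(size=(n, 1))      # query locations
s = rng.normal(size=n)            # target operator values

# DeepONet-style combination: elementwise product of branch and trunk.
Phi = random_features(U, W_b, b_b) * random_features(Y, W_t, b_t)

# Only the output layer is trained, via linear least squares.
alpha, *_ = np.linalg.lstsq(Phi, s, rcond=None)

def predict(u, y):
    # u: (m,) sensor values; y: (1,) query coordinate.
    phi = random_features(u, W_b, b_b) * random_features(y, W_t, b_t)
    return phi @ alpha
```

Because the least-squares solve replaces iterative gradient descent over all parameters, training reduces to assembling the feature matrix and one linear solve, which is where the reported order-of-magnitude speedups would come from.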
Related papers
- RandONet: Shallow-Networks with Random Projections for learning linear and nonlinear operators [0.0]
We present Random Projection-based Operator Networks (RandONets).
RandONets are shallow networks with random projections that learn linear and nonlinear operators.
We show that, for this particular task, RandONets outperform the "vanilla" DeepONets in both numerical approximation accuracy and computational cost.
arXiv Detail & Related papers (2024-06-08T13:20:48Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Learning in latent spaces improves the predictive accuracy of deep neural operators [0.0]
L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
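A loose sketch of the latent-space idea described above, with PCA standing in for the paper's autoencoders and a linear least-squares map standing in for the latent DeepONet; all data and sizes are placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

n, d, k = 400, 256, 10           # samples, function grid size, latent dim
X_in = rng.normal(size=(n, d))   # high-dimensional input functions
X_out = rng.normal(size=(n, d))  # corresponding output functions

def pca(X, k):
    # PCA as a linear stand-in for an autoencoder: encode/decode maps.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:k].T
    return (lambda X: (X - mu) @ V), (lambda Z: Z @ V.T + mu)

enc_in, _ = pca(X_in, k)
enc_out, dec_out = pca(X_out, k)

# Learn the operator between latent codes; least squares stands in
# for the operator network trained in the reduced space.
A, *_ = np.linalg.lstsq(enc_in(X_in), enc_out(X_out), rcond=None)

def predict(x):
    # Encode the input, apply the latent operator, decode the output.
    return dec_out(enc_in(x[None]) @ A)[0]
```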
arXiv Detail & Related papers (2023-04-15T17:13:09Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
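A toy illustration of why implicit updates stabilize training, not the paper's PINN method: on a stiff quadratic loss, the explicit gradient step diverges once the step size exceeds the stability limit, while the implicit step (solvable in closed form here) is stable for any positive step size.

```python
# Minimize L(t) = 0.5 * lam * t**2 with a stiff curvature lam.
lam, lr = 100.0, 0.05
t_exp = t_imp = 1.0

for _ in range(50):
    # Explicit step: t <- t - lr * L'(t); diverges once lr * lam > 2.
    t_exp = t_exp - lr * lam * t_exp
    # Implicit step: t <- t - lr * L'(t_new); closed form for a quadratic.
    t_imp = t_imp / (1.0 + lr * lam)

print(t_exp, t_imp)  # explicit blows up (lr * lam = 5); implicit -> 0
```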
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
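As background for the probabilistic representation this line of work builds on, here is a one-dimensional sketch: for the heat equation u_t = u_xx with initial data g, the Feynman-Kac formula gives u(x, t) = E[g(x + sqrt(2t) Z)] with Z standard normal, which a plain Monte Carlo average can estimate. This illustrates the representation only, not the paper's neural solver.

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_mc(g, x, t, n_samples=200_000):
    # Feynman-Kac for u_t = u_xx, u(x, 0) = g(x):
    # u(x, t) = E[g(x + sqrt(2 t) Z)], Z ~ N(0, 1).
    z = rng.standard_normal(n_samples)
    return g(x + np.sqrt(2.0 * t) * z).mean()

# Sanity check: for g = sin, the exact solution is exp(-t) * sin(x).
x, t = 0.7, 0.1
print(heat_mc(np.sin, x, t), np.exp(-t) * np.sin(x))
```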
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Reliable extrapolation of deep neural operators informed by physics or sparse observations [2.887258133992338]
Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks.
DeepONets provide a new simulation paradigm in science and engineering.
We propose five reliable learning methods that guarantee a safe prediction under extrapolation.
arXiv Detail & Related papers (2022-12-13T03:02:46Z)
- DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator Splitting [12.655884541938656]
We develop a learning-based PDE solver, which we name Deep Operator-Splitting Network (DOSnet).
DOSnet is constructed from the physical rules and operators governing the underlying dynamics, and contains learnable parameters.
We train and validate it on several types of operator-decomposable differential equations.
arXiv Detail & Related papers (2022-12-11T18:23:56Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
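A rough NumPy sketch of the reduced-order pipeline described above, under loose assumptions: a POD basis is extracted from snapshot solutions via SVD, and a simple linear least-squares map stands in for the paper's networks taking PDE parameters to reduced coefficients. All sizes and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: snapshot solutions of a parametric PDE, one column
# per parameter sample (sizes are illustrative, not from the paper).
n_dof, n_snap, k = 500, 60, 8
params = rng.uniform(size=(n_snap, 3))          # PDE parameters per snapshot
snapshots = rng.normal(size=(n_dof, n_snap))    # placeholder solution fields

# POD basis: leading left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :k]                                # (n_dof, k)

# Reduced coordinates of each snapshot in the POD basis.
coeffs = basis.T @ snapshots                    # (k, n_snap)

# Linear least-squares map from PDE parameters to reduced coefficients,
# standing in for the paper's regression networks.
A, *_ = np.linalg.lstsq(params, coeffs.T, rcond=None)

def reduced_solution(p):
    # Reduced-order approximation for a new parameter vector p of shape (3,).
    return basis @ (p @ A)
```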
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling long-range entanglements.
The experiments on solving Poisson's equation and (2D and 3D) Navier-Stokes equation demonstrate that the long-range entanglements can be well modeled by the LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z)
- Improved architectures and training algorithms for deep operator networks [0.0]
Operator learning techniques have emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces.
We analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory.
arXiv Detail & Related papers (2021-10-04T18:34:41Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) approach to solve the AC optimal power flow (AC-OPF) problem.
The proposed sensitivity-informed DNN (SI-DNN) is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)