FB-HyDON: Parameter-Efficient Physics-Informed Operator Learning of Complex PDEs via Hypernetwork and Finite Basis Domain Decomposition
- URL: http://arxiv.org/abs/2409.09207v1
- Date: Fri, 13 Sep 2024 21:41:59 GMT
- Title: FB-HyDON: Parameter-Efficient Physics-Informed Operator Learning of Complex PDEs via Hypernetwork and Finite Basis Domain Decomposition
- Authors: Milad Ramezankhani, Rishi Yash Parekh, Anirudh Deodhar, Dagnachew Birru
- Abstract summary: Deep operator networks (DeepONet) and neural operators have gained significant attention for their ability to learn maps between infinite-dimensional function spaces and perform zero-shot super-resolution.
We introduce Finite Basis Physics-Informed HyperDeepONet (FB-HyDON), an advanced operator architecture featuring intrinsic domain decomposition.
By leveraging hypernetworks and finite basis functions, FB-HyDON effectively mitigates the training limitations associated with existing physics-informed operator learning methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep operator networks (DeepONet) and neural operators have gained significant attention for their ability to learn maps between infinite-dimensional function spaces and perform zero-shot super-resolution. However, these models often require large datasets for effective training. While physics-informed operators offer a data-agnostic learning approach, they introduce additional training complexities and convergence issues, especially in highly nonlinear systems. To overcome these challenges, we introduce Finite Basis Physics-Informed HyperDeepONet (FB-HyDON), an advanced operator architecture featuring intrinsic domain decomposition. By leveraging hypernetworks and finite basis functions, FB-HyDON effectively mitigates the training limitations associated with existing physics-informed operator learning methods. We validated our approach on the high-frequency harmonic oscillator, Burgers' equation at different viscosity levels, and the Allen-Cahn equation, demonstrating substantial improvements over other operator learning models.
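To make the architecture concrete, below is a minimal PyTorch sketch of the two ingredients named above: a hypernetwork that generates the trunk-network weights for each subdomain, and smooth finite-basis window functions that blend the subdomain outputs into a global prediction (a partition of unity). All names, layer sizes, and the Gaussian window choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FiniteBasisHyperDeepONet(nn.Module):
    """Sketch: a hypernetwork emits per-subdomain trunk weights; smooth
    window functions blend subdomain outputs (partition of unity)."""
    def __init__(self, m_sensors=50, n_subdomains=4, hidden=32):
        super().__init__()
        self.n_sub, self.hidden = n_subdomains, hidden
        # Subdomain centres and widths on [0, 1] for the window functions.
        self.centers = torch.linspace(0.0, 1.0, n_subdomains)
        self.width = 1.0 / n_subdomains
        # Hypernetwork: input-function samples -> weights of a tiny trunk MLP,
        # one weight vector per subdomain.
        n_trunk_params = (hidden + hidden) + (hidden + 1)  # W1, b1, W2, b2
        self.hyper = nn.Sequential(
            nn.Linear(m_sensors, 128), nn.Tanh(),
            nn.Linear(128, n_subdomains * n_trunk_params))

    def trunk(self, x, p):
        # Unpack generated parameters into a one-hidden-layer MLP and apply it.
        h = self.hidden
        W1, b1 = p[:h].view(h, 1), p[h:2 * h]
        W2, b2 = p[2 * h:3 * h].view(1, h), p[3 * h:]
        return torch.tanh(x @ W1.T + b1) @ W2.T + b2  # (N, 1)

    def forward(self, u_sensors, x):
        # u_sensors: (m_sensors,) input-function samples; x: (N, 1) query points.
        params = self.hyper(u_sensors).view(self.n_sub, -1)
        w = torch.exp(-((x - self.centers) / self.width) ** 2)  # (N, n_sub)
        w = w / w.sum(dim=1, keepdim=True)  # normalise -> partition of unity
        out = torch.stack(
            [self.trunk(x, params[k])[:, 0] for k in range(self.n_sub)], dim=1)
        return (w * out).sum(dim=1)  # blended operator output, (N,)

model = FiniteBasisHyperDeepONet()
s = model(torch.randn(50), torch.rand(8, 1))  # solution at 8 query points
```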
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
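One plausible reading of such a product layer, sketched here in PyTorch: augment the channel features with learned pairwise products, echoing how physical quantities combine multiplicatively under dimensional analysis. The projections and fusion below are assumptions, not the paper's exact ProdLayer.

```python
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    """Illustrative product layer: appends learned pairwise products of
    channel projections, so the block can express quadratic (flux-like)
    terms directly. Not the paper's exact design."""
    def __init__(self, channels, n_products=4):
        super().__init__()
        self.a = nn.Linear(channels, n_products, bias=False)
        self.b = nn.Linear(channels, n_products, bias=False)
        self.mix = nn.Linear(channels + n_products, channels)

    def forward(self, x):  # x: (..., channels), e.g. grid or token features
        prod = self.a(x) * self.b(x)  # element-wise products of projections
        return self.mix(torch.cat([x, prod], dim=-1))

layer = ProdLayer(channels=16)
y = layer(torch.randn(8, 64, 16))  # drop-in inside an FNO/Transformer block
```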
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- An efficient wavelet-based physics-informed neural networks for singularly perturbed problems [0.0]
Physics-informed neural networks (PINNs) are a class of deep learning models that embed the governing physics, expressed as differential equations, into the training loss.
We present an efficient wavelet-based PINNs model to solve singularly perturbed differential equations.
The architecture allows the training process to search for a solution within wavelet space, making the process faster and more accurate.
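A hedged sketch of how that search can look in practice: give the first layer learnable dilations and translations of a Morlet-like mother wavelet, so training optimises coefficients in a wavelet-like space. The wavelet family and parameterisation below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WaveletPINN(nn.Module):
    """Sketch of a PINN whose hidden units are dilated/translated
    Morlet-like wavelets (illustrative family, not necessarily the paper's)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(hidden))                 # dilations
        self.shift = nn.Parameter(torch.linspace(-1.0, 1.0, hidden))  # translations
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (N, 1)
        z = (x - self.shift) * self.scale                  # (N, hidden)
        psi = torch.cos(5.0 * z) * torch.exp(-z ** 2 / 2)  # Morlet-like wavelet
        return self.head(psi)

model = WaveletPINN()
x = torch.rand(16, 1, requires_grad=True)
u = model(x)
u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # feeds a PDE residual
```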
arXiv Detail & Related papers (2024-09-18T10:01:37Z)
- An Advanced Physics-Informed Neural Operator for Comprehensive Design Optimization of Highly-Nonlinear Systems: An Aerospace Composites Processing Case Study [0.0]
This paper introduces an advanced physics-informed DeepONet tailored for such complex systems with multiple input functions.
The proposed model handles high-dimensional design spaces with significantly improved accuracy, outperforming the vanilla physics-informed DeepONet by two orders of magnitude.
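A minimal sketch of a DeepONet that accepts several input functions, assuming one branch network per function fused with the trunk by element-wise product; the fusion rule and layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiInputDeepONet(nn.Module):
    """Sketch: one branch per input function; branch codes are fused by
    element-wise product and contracted against trunk features."""
    def __init__(self, m=50, p=64):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        self.branch2 = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, p))

    def forward(self, u1, u2, xt):  # u1, u2: (B, m) sensor values; xt: (N, 2)
        b = self.branch1(u1) * self.branch2(u2)  # (B, p) fused branch codes
        t = self.trunk(xt)                       # (N, p) query features
        return b @ t.T                           # (B, N) operator outputs

net = MultiInputDeepONet()
out = net(torch.randn(4, 50), torch.randn(4, 50), torch.rand(100, 2))
```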
arXiv Detail & Related papers (2024-06-20T20:19:30Z)
- Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems with PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining on multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
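A rough sketch of the tokenization idea: each physical variable (channel) of the discretised function becomes one token, and self-attention mixes variables, so new physics can enter as new tokens. CoDA-NO additionally lifts tokens with neural-operator layers, which this toy omits; the sizes are illustrative.

```python
import torch
import torch.nn as nn

class CodomainAttention(nn.Module):
    """Sketch of codomain tokenization: one token per physical variable,
    self-attention across variables (illustrative, simplified)."""
    def __init__(self, grid_size, d_model=128, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(grid_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, grid_size)

    def forward(self, u):  # u: (batch, n_variables, grid_size)
        tok = self.embed(u)              # (B, n_vars, d_model) variable tokens
        mixed, _ = self.attn(tok, tok, tok)
        return self.proj(mixed)          # back to per-variable function values

layer = CodomainAttention(grid_size=256)
y = layer(torch.randn(2, 3, 256))  # e.g. velocity-x, velocity-y, pressure
```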
arXiv Detail & Related papers (2024-03-19T08:56:20Z)
- Learning Generic Solutions for Multiphase Transport in Porous Media via the Flux Functions Operator [0.0]
DeepONet has emerged as a powerful tool for accelerating the solution of PDEs.
We use Physics-Informed DeepONets (PI-DeepONets) to achieve this mapping without any paired input-output observations.
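The pattern behind such data-free training, sketched below with a Burgers-style residual standing in for the transport equations: autodifferentiate the operator output at random collocation points and penalise the PDE residual. `model` is any DeepONet-style network mapping (sensor values, query coordinates) to solution values; all names are illustrative.

```python
import torch

def physics_loss(model, u_sensors, n_colloc=256, nu=0.01):
    """Sketch of a physics-informed operator loss: no paired outputs, only a
    PDE residual at collocation points (Burgers-style residual as a stand-in)."""
    xt = torch.rand(n_colloc, 2, requires_grad=True)  # (x, t) collocation points
    s = model(u_sensors, xt)                          # predicted solution, (n_colloc,)
    grads = torch.autograd.grad(s.sum(), xt, create_graph=True)[0]
    s_x, s_t = grads[:, 0], grads[:, 1]
    s_xx = torch.autograd.grad(s_x.sum(), xt, create_graph=True)[0][:, 0]
    residual = s_t + s * s_x - nu * s_xx              # e.g. viscous Burgers
    return (residual ** 2).mean()                     # minimised with IC/BC terms
```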
arXiv Detail & Related papers (2023-07-03T21:10:30Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work instead proposes an open-source framework that trains deep surrogate models online, on data streamed directly from the solver.
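A toy version of the pattern might look as follows: the surrogate trains on snapshots as the solver produces them, keeping only a small rolling buffer rather than an on-disk dataset. `run_solver_step` is a hypothetical stand-in for the coupled numerical solver.

```python
import torch

def online_train(model, run_solver_step, steps=10_000, lr=1e-3):
    """Sketch of online surrogate training: consume freshly generated
    (input, target) pairs, retaining a small buffer to reduce recency bias."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    buffer = []
    for _ in range(steps):
        params, field = run_solver_step()   # fresh sample from the solver
        buffer.append((params, field))
        if len(buffer) > 512:
            buffer.pop(0)                   # discard the oldest sample
        i = torch.randint(len(buffer), (1,)).item()
        p, f = buffer[i]
        loss = torch.nn.functional.mse_loss(model(p), f)
        opt.zero_grad()
        loss.backward()
        opt.step()
```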
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
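A loose sketch of a spectral-style latent block: project latent tokens onto a small set of learned basis vectors, transform the coefficients, and reconstruct. The basis parameterisation below is an assumption, not LSM's exact neural spectral block.

```python
import torch
import torch.nn as nn

class NeuralSpectralBlock(nn.Module):
    """Sketch: solve in a learned low-dimensional spectral space
    (project, transform coefficients, reconstruct); illustrative only."""
    def __init__(self, n_tokens, d_latent, n_basis=16):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(n_tokens, n_basis) / n_tokens ** 0.5)
        self.coeff_mlp = nn.Sequential(
            nn.Linear(n_basis, n_basis), nn.GELU(), nn.Linear(n_basis, n_basis))

    def forward(self, z):  # z: (batch, n_tokens, d_latent)
        coeff = torch.einsum('btd,tk->bkd', z, self.basis)  # spectral coefficients
        coeff = self.coeff_mlp(coeff.transpose(1, 2)).transpose(1, 2)
        return torch.einsum('bkd,tk->btd', coeff, self.basis)  # reconstruct tokens

block = NeuralSpectralBlock(n_tokens=64, d_latent=32)
out = block(torch.randn(2, 64, 32))
```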
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Mitigating spectral bias for the multiscale operator learning [14.404769413313371]
We propose a hierarchical attention neural operator (HANO) inspired by the hierarchical matrix approach.
HANO features a scale-adaptive interaction range and self-attentions over a hierarchy of levels, enabling nested feature computation with controllable linear cost.
Our numerical experiments demonstrate that HANO outperforms state-of-the-art (SOTA) methods for representative multiscale problems.
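One way to get multiscale interactions at linear cost, sketched below: windowed self-attention applied at several coarsened copies of the token sequence and summed across levels. This is an illustrative stand-in, not HANO's exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttention(nn.Module):
    """Sketch: fixed-size windowed attention at each of several coarsened
    levels, accumulated across scales (illustrative, not HANO itself)."""
    def __init__(self, d_model=64, levels=3, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(d_model, heads, batch_first=True)
            for _ in range(levels))

    def forward(self, x):  # x: (batch, n_tokens, d_model)
        out = 0.0
        for lvl, attn in enumerate(self.attn):
            z = x[:, :: 2 ** lvl]                   # coarsen by subsampling
            B, N, D = z.shape
            n_win = N // self.window
            zw = z[:, : n_win * self.window].reshape(B * n_win, self.window, D)
            aw, _ = attn(zw, zw, zw)                # local windowed attention
            aw = aw.reshape(B, n_win * self.window, D)
            up = F.interpolate(aw.transpose(1, 2), size=x.shape[1]).transpose(1, 2)
            out = out + up                          # accumulate scales
        return out

layer = HierarchicalAttention()
y = layer(torch.randn(2, 64, 64))
```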
arXiv Detail & Related papers (2022-10-19T21:09:29Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
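That curvature measure can be made concrete: the Laplacian of the loss with respect to the parameters equals the trace of its Hessian, which Hutchinson's randomised estimator approximates from Hessian-vector products. A minimal sketch (helper names are illustrative):

```python
import torch

def loss_laplacian(loss_fn, params, n_probes=10):
    """Estimate the Laplacian (Hessian trace) of a scalar loss w.r.t. the
    parameters via Hutchinson's estimator with Rademacher probes."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = torch.zeros(())
    for _ in range(n_probes):
        v = [torch.randint(0, 2, p.shape, dtype=p.dtype) * 2 - 1 for p in params]
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))      # g·v
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        estimate = estimate + sum((h * vi).sum() for h, vi in zip(hv, v))
    return estimate / n_probes  # ≈ tr(H) = Laplacian of the loss
```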
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
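A compact sketch of that two-stage recipe: extract a POD basis from solution snapshots via the SVD, then train a network to map PDE parameters to basis coefficients. The three-parameter input and 16 modes below are arbitrary illustrative choices.

```python
import torch

# Stage 1: POD basis from (illustrative, random) solution snapshots.
snapshots = torch.randn(200, 1024)  # 200 snapshots on a 1024-point grid
U, S, Vh = torch.linalg.svd(snapshots - snapshots.mean(0), full_matrices=False)
pod_basis = Vh[:16]                 # leading 16 POD modes, (16, 1024)

# Stage 2: a branch-style network maps PDE parameters to POD coefficients.
coeff_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 16))

def reduced_solution(pde_params):   # pde_params: (batch, 3), e.g. Mach number, ...
    return coeff_net(pde_params) @ pod_basis  # (batch, 1024) reconstructed fields
```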
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Improved architectures and training algorithms for deep operator networks [0.0]
Operator learning techniques have emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces.
We analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory.
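The empirical version of that analysis is straightforward to compute: stack per-sample parameter gradients into a Jacobian J and form the kernel K = J Jᵀ, whose spectrum indicates which components of the target a DeepONet learns fastest. A sketch of such a helper (illustrative, not the paper's code):

```python
import torch

def empirical_ntk(model, inputs):
    """Empirical NTK: K[i, j] = <∂f(x_i)/∂θ, ∂f(x_j)/∂θ> for scalar outputs."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for x in inputs:                        # one backward pass per sample
        out = model(x).sum()
        g = torch.autograd.grad(out, params)
        rows.append(torch.cat([gi.reshape(-1) for gi in g]))
    J = torch.stack(rows)                   # (n_samples, n_params) Jacobian
    return J @ J.T                          # (n_samples, n_samples) kernel
```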
arXiv Detail & Related papers (2021-10-04T18:34:41Z)