Deciphering and integrating invariants for neural operator learning with various physical mechanisms
- URL: http://arxiv.org/abs/2311.14361v2
- Date: Mon, 12 Feb 2024 14:45:16 GMT
- Title: Deciphering and integrating invariants for neural operator learning with various physical mechanisms
- Authors: Rui Zhang, Qi Meng, Zhi-Ming Ma
- Abstract summary: We propose Physical Invariant Attention Neural Operator (PIANO) to decipher and integrate the physical invariants (PI) for operator learning from the PDE series with various physical mechanisms.
Compared to existing techniques, PIANO can reduce the relative error by 13.6%-82.2% on PDE forecasting tasks across varying coefficients, forces, or boundary conditions.
- Score: 22.508244510177683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural operators have been explored as surrogate models for simulating
physical systems to overcome the limitations of traditional partial
differential equation (PDE) solvers. However, most existing operator learning
methods assume that the data originate from a single physical mechanism,
limiting their applicability and performance in more realistic scenarios. To
this end, we propose Physical Invariant Attention Neural Operator (PIANO) to
decipher and integrate the physical invariants (PI) for operator learning from
the PDE series governed by various physical mechanisms. PIANO employs self-supervised
learning to extract physical knowledge and an attention mechanism to integrate the
extracted invariant embeddings into dynamic convolutional layers. Compared to existing
techniques, PIANO reduces the relative error by 13.6%-82.2% on PDE forecasting tasks
across varying coefficients, forces, or boundary conditions. Additionally, various
downstream tasks reveal that the PI embeddings deciphered by PIANO align well
with the underlying invariants in the PDE systems, verifying the physical
significance of PIANO. The source code will be publicly available at:
https://github.com/optray/PIANO.
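As a rough illustration of the mechanism described above (a self-supervised PI embedding routed through attention into dynamic convolutions), the following is a minimal, hypothetical PyTorch sketch of a PI-conditioned dynamic convolution layer. The class name, shapes, kernel-bank size, and the small attention MLP are illustrative assumptions, not the authors' implementation; the official code is in the linked repository.

```python
# Hypothetical sketch: attention over a bank of convolution kernels,
# conditioned on a physical-invariant (PI) embedding. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PIDynamicConv2d(nn.Module):
    """Mixes a bank of candidate kernels with attention weights derived from a
    PI embedding, then applies the resulting per-sample kernel to the field."""
    def __init__(self, in_ch, out_ch, pi_dim, num_kernels=4, kernel_size=3):
        super().__init__()
        # Bank of candidate kernels: (K, out_ch, in_ch, k, k)
        self.kernels = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02
        )
        # Attention over the kernel bank, conditioned on the PI embedding.
        self.attn = nn.Sequential(
            nn.Linear(pi_dim, 64), nn.GELU(), nn.Linear(64, num_kernels)
        )
        self.padding = kernel_size // 2

    def forward(self, u, pi_embed):
        # u: (B, in_ch, H, W) field snapshot; pi_embed: (B, pi_dim) PI embedding
        B = u.shape[0]
        weights = F.softmax(self.attn(pi_embed), dim=-1)           # (B, K)
        # Per-sample kernel: weighted sum over the bank -> (B, out_ch, in_ch, k, k)
        mixed = torch.einsum("bk,koilm->boilm", weights, self.kernels)
        out_ch, in_ch = mixed.shape[1], mixed.shape[2]
        # Grouped-conv trick: fold the batch into groups so each sample is
        # convolved with its own mixed kernel in a single conv2d call.
        u = u.reshape(1, B * in_ch, *u.shape[2:])
        mixed = mixed.reshape(B * out_ch, in_ch, *mixed.shape[3:])
        out = F.conv2d(u, mixed, padding=self.padding, groups=B)
        return out.reshape(B, out_ch, *out.shape[2:])
```

In this sketch the PI embedding would come from a separately trained self-supervised encoder; the attention weights select a per-sample mixture over the kernel bank, so fields generated under different coefficients, forces, or boundary conditions are processed with different effective kernels.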
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Disentangled Representation Learning for Parametric Partial Differential Equations [31.240283037552427]
We propose a new paradigm for learning disentangled representations from neural operator parameters.
DisentangO is a novel hyper-neural operator architecture designed to unveil and disentangle the latent physical factors of variation embedded within the black-box neural operator parameters.
We show that DisentangO effectively extracts meaningful and interpretable latent features, bridging the divide between predictive performance and physical understanding in neural operator frameworks.
arXiv Detail & Related papers (2024-10-03T01:40:39Z)
- Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery [25.75410883895742]
We propose a novel neural operator architecture based on the attention mechanism, which we coin the Nonlocal Attention Operator (NAO).
NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and achieving generalizability.
arXiv Detail & Related papers (2024-08-14T05:57:56Z)
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
- Diffusion models as probabilistic neural operators for recovering unobserved states of dynamical systems [49.2319247825857]
We show that diffusion-based generative models exhibit many properties favourable for neural operators.
We propose to train a single model adaptable to multiple tasks, by alternating between the tasks during training.
arXiv Detail & Related papers (2024-05-11T21:23:55Z)
- Physics informed WNO [0.0]
We propose a physics-informed Wavelet Neural Operator (WNO) for learning the solution operators of families of parametric partial differential equations (PDEs) without labeled training data.
The efficacy of the framework is validated and illustrated with four nonlinear systems relevant to various fields of engineering and science.
arXiv Detail & Related papers (2023-02-12T14:31:50Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- One-shot learning for solution operators of partial differential equations [3.559034814756831]
Learning and solving governing equations of a physical system, represented by partial differential equations (PDEs), from data is a central challenge in a variety of areas of science and engineering.
Traditional numerical methods for solving PDEs can be computationally expensive for complex systems and require the complete PDEs of the physical system.
Here, we propose the first solution operator learning method that only requires one PDE solution, i.e., one-shot learning.
arXiv Detail & Related papers (2021-04-06T17:35:10Z)
- Learning to Control PDEs with Differentiable Physics [102.36050646250871]
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs.
arXiv Detail & Related papers (2020-01-21T11:58:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.