INO: Invariant Neural Operators for Learning Complex Physical Systems with Momentum Conservation
- URL: http://arxiv.org/abs/2212.14365v1
- Date: Thu, 29 Dec 2022 16:40:41 GMT
- Title: INO: Invariant Neural Operators for Learning Complex Physical Systems with Momentum Conservation
- Authors: Ning Liu, Yue Yu, Huaiqian You, Neeraj Tatikola
- Abstract summary: We introduce a novel integral neural operator architecture that learns physical models with fundamental conservation laws automatically guaranteed.
As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets.
- Score: 8.218875461185016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural operators, which emerge as implicit solution operators of hidden
governing equations, have recently become popular tools for learning responses
of complex real-world physical systems. Nevertheless, the majority of neural
operator applications have thus far been data-driven, neglecting the fundamental
physical laws intrinsically preserved in the data. In this paper, we introduce a
novel integral neural operator architecture that learns physical models with
fundamental conservation laws automatically guaranteed. In
particular, by replacing the frame-dependent position information with its
invariant counterpart in the kernel space, the proposed neural operator is by
design translation- and rotation-invariant, and consequently abides by the
conservation laws of linear and angular momentum. As applications, we
demonstrate the expressivity and efficacy of our model in learning complex
material behaviors from both synthetic and experimental datasets, and show
that, by automatically satisfying these essential physical laws, our learned
neural operator not only generalizes to translated and rotated datasets but also
achieves state-of-the-art accuracy and efficiency compared to baseline neural
operator models.
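The core mechanism described in the abstract can be made concrete with a short sketch: a kernel integral layer whose kernel sees only the frame-invariant pairwise distance rather than absolute positions, so its output is unchanged under translations and rotations of the input coordinates. The module below is a hypothetical illustration of that principle (names, sizes, and the distance-only edge feature are assumptions), not the authors' released implementation.

```python
import math

import torch
import torch.nn as nn


class InvariantKernelLayer(nn.Module):
    """One integral-operator layer whose kernel takes only frame-invariant
    quantities (here the pairwise distance |x_i - x_j|), so the layer is
    unaffected by translating or rotating the input coordinates."""

    def __init__(self, width: int, hidden: int = 64):
        super().__init__()
        # kernel network: invariant edge feature -> (width x width) matrix
        self.kernel = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, width * width),
        )
        self.width = width

    def forward(self, u: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # u: (n, width) nodal features, x: (n, d) nodal coordinates
        n = x.shape[0]
        dist = torch.cdist(x, x).reshape(n, n, 1)          # invariant pairwise distances
        k = self.kernel(dist).reshape(n, n, self.width, self.width)
        # discrete kernel integral: v_i = (1/n) * sum_j K(|x_i - x_j|) u_j
        return torch.einsum("ijab,jb->ia", k, u) / n


if __name__ == "__main__":
    # quick check: rotating and translating the coordinates leaves the output unchanged
    torch.manual_seed(0)
    layer = InvariantKernelLayer(width=8)
    x, u = torch.randn(16, 2), torch.randn(16, 8)
    c, s = math.cos(0.7), math.sin(0.7)
    R = torch.tensor([[c, -s], [s, c]])
    x_moved = x @ R.T + torch.tensor([3.0, -1.0])
    print(torch.allclose(layer(u, x), layer(u, x_moved), atol=1e-5))  # True
```

Because the kernel never receives absolute coordinates, translated or rotated copies of a dataset map to identical outputs, which is the mechanism behind the generalization and momentum-conservation claims above.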
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain across the PDE datasets.
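As a rough illustration of how a product-forming layer might slot into an FNO- or Transformer-style solver, here is a hedged sketch of a layer that augments the usual channel mix with element-wise products of two learned projections, letting the network form product terms directly, as dimensional analysis suggests physically meaningful quantities often are. The class name and design are assumptions; the paper's ProdLayer may be defined differently.

```python
import torch
import torch.nn as nn


class ProductLayer(nn.Module):
    """Toy product-forming layer: element-wise products of two learned channel
    projections are concatenated to the features and mixed back down, so
    bilinear terms (e.g. velocity * density) are available in one step."""

    def __init__(self, channels: int, n_products: int = 4):
        super().__init__()
        self.a = nn.Linear(channels, n_products)
        self.b = nn.Linear(channels, n_products)
        self.mix = nn.Linear(channels + n_products, channels)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (..., channels) pointwise features from a spectral or attention block
        prod = self.a(u) * self.b(u)                     # learned product terms
        return self.mix(torch.cat([u, prod], dim=-1))    # mix back to the original width
```

A layer like this is purely pointwise, so it could sit between existing spectral-convolution or attention blocks without affecting their resolution handling.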
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery [25.75410883895742]
We propose a novel neural operator architecture based on the attention mechanism, which we coin the Nonlocal Attention Operator (NAO).
NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and achieving generalizability.
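One way to read "a neural operator based on the attention mechanism" is that the softmax attention matrix between spatial points acts as a data-dependent nonlocal kernel. The sketch below shows only that reading, with assumed layer names, and is not the NAO architecture itself.

```python
import torch
import torch.nn as nn


class AttentionKernelLayer(nn.Module):
    """Attention read as a nonlocal kernel: the (n x n) softmax attention matrix
    between spatial points plays the role of a data-dependent integral kernel."""

    def __init__(self, width: int):
        super().__init__()
        self.q = nn.Linear(width, width)
        self.k = nn.Linear(width, width)
        self.v = nn.Linear(width, width)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (n_points, width) features of the input function sampled on the domain
        attn = torch.softmax(self.q(u) @ self.k(u).T / u.shape[-1] ** 0.5, dim=-1)
        return attn @ self.v(u)   # kernel integral with a learned, input-dependent kernel
```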
arXiv Detail & Related papers (2024-08-14T05:57:56Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
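For intuition about "reducing solving linear ODEs to solving linear programs", the toy example below discretises y' = a*y and asks a linear program to minimise a slack variable that bounds every forward-Euler residual. It illustrates the general reduction only, with assumed function names, and is not the NeuRLP solver from the paper.

```python
import numpy as np
from scipy.optimize import linprog


def linear_ode_as_lp(a: float, y0: float, T: float, n: int) -> np.ndarray:
    """Solve y' = a*y on [0, T] by minimising a slack eps that bounds every
    forward-Euler residual; the LP optimum recovers the Euler trajectory."""
    h = T / n
    n_var = n + 2                        # variables: y_0 ... y_n and the slack eps
    c = np.zeros(n_var)
    c[-1] = 1.0                          # objective: minimise eps

    A_ub, b_ub = [], []
    for i in range(n):
        pos, neg = np.zeros(n_var), np.zeros(n_var)
        # residual r_i = y_{i+1} - (1 + h*a) * y_i must satisfy -eps <= r_i <= eps
        pos[i + 1], pos[i], pos[-1] = 1.0, -(1.0 + h * a), -1.0
        neg[i + 1], neg[i], neg[-1] = -1.0, (1.0 + h * a), -1.0
        A_ub.extend([pos, neg])
        b_ub.extend([0.0, 0.0])

    A_eq = np.zeros((1, n_var))
    A_eq[0, 0] = 1.0                     # pin the initial condition y_0 = y0
    bounds = [(None, None)] * (n + 1) + [(0.0, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[y0], bounds=bounds, method="highs")
    return res.x[: n + 1]                # the recovered trajectory y_0 ... y_n


# y(1) for y' = -y, y(0) = 1: close to exp(-1) up to the Euler discretisation error
print(linear_ode_as_lp(a=-1.0, y0=1.0, T=1.0, n=50)[-1])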
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Generalization Error Guaranteed Auto-Encoder-Based Nonlinear Model Reduction for Operator Learning [12.124206935054389]
In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating the Auto-Encoder-based Neural Network (AENet).
Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations.
Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process.
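A minimal sketch of auto-encoder-based nonlinear model reduction for operator learning, assuming input and output functions are sampled on a fixed sensor grid: compress to a low-dimensional latent code (matching the intrinsic dimension mentioned above), learn the solution map in latent space, and decode. Names and sizes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class AENetSketch(nn.Module):
    """Auto-encoder-style operator surrogate: encoder -> latent solution map ->
    decoder, so the expensive learning happens in a low-dimensional latent space."""

    def __init__(self, n_sensors: int, latent_dim: int = 8, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, hidden), nn.Tanh(),
                                     nn.Linear(hidden, latent_dim))
        self.latent_map = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, n_sensors))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, n_sensors) input function sampled on a fixed grid
        z_in = self.encoder(u)           # low-dimensional (intrinsic) representation
        z_out = self.latent_map(z_in)    # learned solution operator in latent space
        return self.decoder(z_out)       # sampled output function
```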
arXiv Detail & Related papers (2024-01-19T05:01:43Z)
- Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model for Complex Material Responses [12.454290779121383]
We introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal law from data.
This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed.
We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency compared to baseline models.
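For intuition, here is a hedged, bond-based simplification: a learned scalar force magnitude acts along each deformed bond inside a horizon, and because the pairwise forces are antisymmetric, the internal forces sum to zero (linear momentum balance), while dependence on bond lengths and stretches alone gives objectivity. The PNO paper itself works with state-based peridynamics, which this assumed sketch does not reproduce.

```python
import torch
import torch.nn as nn


class BondForceOperator(nn.Module):
    """Bond-based peridynamics-style layer: a learned scalar force magnitude acts
    along each deformed bond; f_ij = -f_ji, so internal forces sum to zero."""

    def __init__(self, horizon: float, hidden: int = 64):
        super().__init__()
        self.horizon = horizon
        # inputs: reference bond length and bond stretch (both frame-invariant)
        self.force_mag = nn.Sequential(nn.Linear(2, hidden), nn.GELU(),
                                       nn.Linear(hidden, 1))

    def forward(self, x_ref: torch.Tensor, x_cur: torch.Tensor) -> torch.Tensor:
        # x_ref, x_cur: (n, d) reference and deformed positions of material points
        ref_bond = x_ref[None, :, :] - x_ref[:, None, :]       # bond from i to j
        cur_bond = x_cur[None, :, :] - x_cur[:, None, :]
        ref_len = ref_bond.norm(dim=-1, keepdim=True)
        cur_len = cur_bond.norm(dim=-1, keepdim=True)
        mask = ((ref_len > 0) & (ref_len < self.horizon)).float()
        stretch = (cur_len - ref_len) / ref_len.clamp_min(1e-8)
        mag = self.force_mag(torch.cat([ref_len, stretch], dim=-1))
        unit = cur_bond / cur_len.clamp_min(1e-8)
        # mag is symmetric in (i, j) while unit flips sign, so pairwise forces cancel
        return (mask * mag * unit).sum(dim=1)   # (n, d) internal force density
```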
arXiv Detail & Related papers (2024-01-11T17:37:20Z)
- Harnessing the Power of Neural Operators with Automatically Encoded Conservation Laws [14.210553163356131]
We introduce conservation law-encoded neural operators (clawNOs).
ClawNOs are compliant with the most fundamental and ubiquitous conservation laws essential for correct physical consistency.
They significantly outperform the state-of-the-art NOs in learning efficacy, especially in small-data regimes.
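One standard way to hard-code a conservation law by construction, shown here for 2D mass conservation: predict a scalar stream function and return its perpendicular gradient, which is divergence-free identically. This is a generic illustration of "automatically encoded conservation laws", with assumed names; clawNO's actual construction may differ.

```python
import torch
import torch.nn as nn


class DivergenceFreeHead(nn.Module):
    """Predicts a scalar stream function psi(x, y) and returns the velocity
    u = (d psi/dy, -d psi/dx), which satisfies div(u) = 0 identically."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (n, 2) query points; returns a divergence-free velocity field (n, 2)
        xy = xy.detach().requires_grad_(True)
        psi = self.psi(xy).sum()
        grad = torch.autograd.grad(psi, xy, create_graph=True)[0]   # (dpsi/dx, dpsi/dy)
        return torch.stack([grad[:, 1], -grad[:, 0]], dim=-1)
```

No penalty term is involved: the conservation law holds for any network weights, which is what makes this kind of construction attractive in small-data regimes.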
arXiv Detail & Related papers (2023-12-18T13:21:49Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
An AI framework known as Neural Operators provides a principled way to learn mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
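A hedged sketch of the contrastive idea: learn a scalar function whose value stays (nearly) constant along a trajectory but differs across trajectories, which rules out the trivial constant solution. The loss below, with assumed tensor shapes and names, illustrates the idea only and is not ConCerNet's exact objective.

```python
import torch
import torch.nn as nn


def conservation_contrastive_loss(h: nn.Module, trajs: torch.Tensor, tau: float = 0.1):
    """trajs: (n_traj, n_steps, state_dim). States on the same trajectory should
    map to similar values of h (positives); states on other trajectories are negatives."""
    n_traj, n_steps, _ = trajs.shape
    values = h(trajs.reshape(n_traj * n_steps, -1)).reshape(n_traj, n_steps)
    anchors = values[:, :1]                                   # first state of each trajectory
    # distance between each anchor and every state of every trajectory
    dist = (anchors[:, None, :] - values[None, :, :]).abs().mean(dim=-1)   # (n_traj, n_traj)
    labels = torch.arange(n_traj)                             # the matching trajectory is the positive
    return nn.functional.cross_entropy(-dist / tau, labels)


# usage sketch: h = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1)),
# minimised jointly with the usual dynamics-prediction loss.
```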
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose the Graph Mechanics Network (GMN), which is efficient, equivariant, and constraint-aware.
GMN represents the forward kinematics information (positions and velocities) of a structural object by generalized coordinates.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
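For context, here is an E(n)-equivariant message-passing update in the generic style of equivariant GNNs: messages are built from invariant quantities (squared distances, node features) and positions are updated along relative direction vectors, so the update commutes with rotations and translations. GMN additionally propagates kinematic constraints through generalized coordinates, which this assumed sketch omits.

```python
import torch
import torch.nn as nn


class EquivariantUpdate(nn.Module):
    """E(n)-equivariant update on a fully connected graph: invariant messages,
    positions moved along relative direction vectors."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * feat_dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, hidden))
        self.coord_gate = nn.Linear(hidden, 1)
        self.node = nn.Sequential(nn.Linear(feat_dim + hidden, hidden), nn.SiLU(),
                                  nn.Linear(hidden, feat_dim))

    def forward(self, h: torch.Tensor, x: torch.Tensor):
        # h: (n, feat_dim) node features, x: (n, d) positions
        n = h.shape[0]
        rel = x[:, None, :] - x[None, :, :]                       # (n, n, d) relative vectors
        d2 = (rel ** 2).sum(dim=-1, keepdim=True)                 # invariant edge feature
        pair = torch.cat([h[:, None, :].expand(n, n, -1),
                          h[None, :, :].expand(n, n, -1), d2], dim=-1)
        m = self.msg(pair)                                        # invariant messages
        x_new = x + (self.coord_gate(m) * rel).mean(dim=1)        # equivariant position update
        h_new = self.node(torch.cat([h, m.mean(dim=1)], dim=-1))  # invariant feature update
        return h_new, x_new
```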
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- On the application of Physically-Guided Neural Networks with Internal Variables to Continuum Problems [0.0]
We present Physically-Guided Neural Networks with Internal Variables (PGNNIV).
Universal physical laws are used as constraints in the neural network, such that some neuron values can be interpreted as internal state variables of the system.
This endows the network with unraveling capacity, as well as better predictive properties such as faster convergence, fewer data requirements, and additional noise filtering.
We extend this new methodology to continuum physical problems, showing again its predictive and explanatory capacities when only using measurable values in the training set.
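A minimal sketch of the internal-variable idea under an assumed toy 1D balance law: one hidden layer is read as an internal state variable (a stress-like field) and a physics penalty ties it to the measurable quantities, so only measurable values are needed as training labels while the internal variable remains physically interpretable. The names and the law are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class PGNNIVSketch(nn.Module):
    """Predictive network whose hidden layer is read as an internal state
    variable, later constrained by a physical law instead of direct labels."""

    def __init__(self, n_in: int, n_internal: int, n_out: int, hidden: int = 64):
        super().__init__()
        self.to_internal = nn.Sequential(nn.Linear(n_in, hidden), nn.Tanh(),
                                         nn.Linear(hidden, n_internal))
        self.to_output = nn.Sequential(nn.Linear(n_internal, hidden), nn.Tanh(),
                                       nn.Linear(hidden, n_out))

    def forward(self, u: torch.Tensor):
        sigma = self.to_internal(u)     # interpreted as the internal state variable
        y = self.to_output(sigma)       # measurable prediction
        return y, sigma


def physics_penalty(sigma: torch.Tensor, body_force: torch.Tensor, dx: float):
    # toy 1D balance law d(sigma)/dx + f = 0, enforced on the internal variable
    dsigma = (sigma[:, 1:] - sigma[:, :-1]) / dx
    return ((dsigma + body_force[:, :-1]) ** 2).mean()


# training loss (sketch): data mismatch on the measurable y plus the physics penalty on
# sigma, so sigma is shaped by the physical constraint rather than by direct supervision.
```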
arXiv Detail & Related papers (2020-11-23T13:06:52Z)