Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model
for Complex Material Responses
- URL: http://arxiv.org/abs/2401.06070v1
- Date: Thu, 11 Jan 2024 17:37:20 GMT
- Title: Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model
for Complex Material Responses
- Authors: Siavash Jafarzadeh, Stewart Silling, Ning Liu, Zhongqiang Zhang, Yue
Yu
- Abstract summary: We introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal constitutive law from data.
This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed.
We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency compared to baseline models.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural operators, which can act as implicit solution operators of hidden
governing equations, have recently become popular tools for learning the
responses of complex real-world physical systems. Nevertheless, most neural
operator applications have thus far been data-driven and neglect the intrinsic
preservation of fundamental physical laws in data. In this work, we introduce a
novel integral neural operator architecture called the Peridynamic Neural
Operator (PNO) that learns a nonlocal constitutive law from data. This neural
operator provides a forward model in the form of state-based peridynamics, with
objectivity and momentum balance laws automatically guaranteed. As
applications, we demonstrate the expressivity and efficacy of our model in
learning complex material behaviors from both synthetic and experimental data
sets. We show that, owing to its ability to capture complex responses, our
learned neural operator achieves improved accuracy and efficiency compared to
baseline models that use predefined constitutive laws. Moreover, by preserving
the essential physical laws within the neural network architecture, the PNO is
robust in treating noisy data. The method shows generalizability to different
domain configurations, external loadings, and discretizations.
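The core structural idea — a learned nonlocal force interaction assembled so that momentum balance holds automatically — can be sketched in a few lines. The following is our own minimal 1D NumPy illustration of that principle, not the authors' PNO architecture; the tiny MLP with random weights stands in for a trained nonlocal kernel, and the discretization is a simple midpoint quadrature over a finite horizon.

```python
import numpy as np

# Random weights stand in for a trained bond-level kernel (assumption).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def force_state(s):
    """Learned scalar force state as a function of bond stretch s."""
    h = np.tanh(s * W1[0] + b1)
    return float(h @ W2[:, 0] + b2[0])

# 1D bar: N nodes, grid spacing dx, horizon of `delta` neighbors.
N, delta, dx = 64, 3, 1.0
x = np.arange(N) * dx
u = 0.01 * np.sin(2 * np.pi * x / (N * dx))  # sample displacement field

f = np.zeros(N)  # nonlocal internal force density
for i in range(N):
    for j in range(max(0, i - delta), min(N, i + delta + 1)):
        if i == j:
            continue
        xi = x[j] - x[i]                         # reference bond
        eta = u[j] - u[i]                        # relative displacement
        s = (abs(xi + eta) - abs(xi)) / abs(xi)  # bond stretch, symmetric in (i, j)
        e = np.sign(xi + eta)                    # deformed bond direction
        # Each unordered pair contributes equal-and-opposite forces,
        # so the internal forces sum to zero over the whole body:
        # linear momentum balance holds by construction.
        f[i] += force_state(s) * e * dx

print(abs(f.sum()))  # ~0 up to floating-point error
```

Because the stretch is symmetric under swapping the two nodes while the bond direction flips sign, the antisymmetric assembly guarantees momentum conservation for any learned kernel — which is the architectural property the abstract emphasizes.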
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
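The reduction alluded to above — recasting a linear ODE as the feasibility problem of a linear program — can be illustrated concretely. This is a toy sketch under our own assumptions (implicit-Euler discretization, trivial objective), not the NeuRLP solver itself:

```python
import numpy as np
from scipy.optimize import linprog

# Solve u'(t) = -u(t), u(0) = 1 on [0, 1] by encoding an
# implicit-Euler discretization as the equality constraints of a
# linear program with a zero objective (pure feasibility).
n, dt = 101, 0.01
A_eq = np.zeros((n, n))
b_eq = np.zeros(n)
A_eq[0, 0] = 1.0          # initial condition: u_0 = 1
b_eq[0] = 1.0
for k in range(n - 1):    # (u_{k+1} - u_k) / dt = -u_{k+1}
    A_eq[k + 1, k + 1] = 1.0 + dt
    A_eq[k + 1, k] = -1.0

res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n, method="highs")
u = res.x
print(u[-1], np.exp(-1.0))  # close to the exact value e^{-1}
```

Since the equality constraints here determine the trajectory uniquely, the LP solver simply recovers the implicit-Euler solution; in the general setting the LP formulation leaves room for additional objectives and constraints.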
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Extreme sparsification of physics-augmented neural networks for interpretable model discovery in mechanics [0.0]
We propose to train regularized physics-augmented neural network-based models using a smoothed version of $L_0$-regularization.
We show that the method can reliably obtain interpretable and trustworthy models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity.
arXiv Detail & Related papers (2023-10-05T16:28:58Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled AI framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Learning Latent Dynamics via Invariant Decomposition and (Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- INO: Invariant Neural Operators for Learning Complex Physical Systems with Momentum Conservation [8.218875461185016]
We introduce a novel integral neural operator architecture, to learn physical models with fundamental conservation laws automatically guaranteed.
As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets.
arXiv Detail & Related papers (2022-12-29T16:40:41Z)
- Learning Deep Implicit Fourier Neural Operators (IFNOs) with Applications to Heterogeneous Material Modeling [3.9181541460605116]
We propose to use data-driven modeling to predict a material's response without using conventional models.
The material response is modeled by learning the implicit mappings between loading conditions and the resultant displacement and/or damage fields.
We demonstrate the performance of our proposed method for a number of examples, including hyperelastic, anisotropic and brittle materials.
arXiv Detail & Related papers (2022-03-15T19:08:13Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- On the application of Physically-Guided Neural Networks with Internal Variables to Continuum Problems [0.0]
We present Physically-Guided Neural Networks with Internal Variables (PGNNIV), in which universal physical laws are used as constraints in the neural network, such that some neuron values can be interpreted as internal state variables of the system.
This endows the network with unraveling capacity and better predictive properties, such as faster convergence, reduced data needs, and additional noise filtering.
We extend this new methodology to continuum physical problems, showing again its predictive and explanatory capacities when only using measurable values in the training set.
arXiv Detail & Related papers (2020-11-23T13:06:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.