On the application of Physically-Guided Neural Networks with Internal
Variables to Continuum Problems
- URL: http://arxiv.org/abs/2011.11376v1
- Date: Mon, 23 Nov 2020 13:06:52 GMT
- Title: On the application of Physically-Guided Neural Networks with Internal
Variables to Continuum Problems
- Authors: Jacobo Ayensa-Jiménez, Mohamed H. Doweidar, Jose A. Sanz-Herrera,
Manuel Doblaré
- Abstract summary: We present Physically-Guided Neural Networks with Internal
Variables (PGNNIV), in which universal physical laws are used as constraints in the neural network, in such a way that some neuron values can be interpreted as internal state variables of the system.
This endows the network with unraveling capacity, as well as better predictive properties such as faster convergence, fewer data needs and additional noise filtering.
We extend this new methodology to continuum physical problems, showing again its predictive and explanatory capacities when using only measurable values in the training set.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive Physics has been historically based upon the development of
mathematical models that describe the evolution of a system under certain
external stimuli and constraints. The structure of such mathematical models
relies on a set of physical hypotheses that are assumed to be fulfilled by the
system within a certain range of environmental conditions. A new perspective is
now emerging that uses physical knowledge to inform the data prediction
capability of artificial neural networks. A particular extension of this
data-driven approach is Physically-Guided Neural Networks with Internal
Variables (PGNNIV): universal physical laws are used as constraints in the
neural network, in such a way that some neuron values can be interpreted as
internal state variables of the system. This endows the network with unraveling
capacity, as well as better predictive properties such as faster convergence,
fewer data needs and additional noise filtering. Besides, only observable data
are used to train the network, and the internal state equations may be
extracted as a result of the training processes, so there is no need to make
explicit the particular structure of the internal state model. We extend this
new methodology to continuum physical problems, showing again its predictive
and explanatory capacities when using only measurable values in the training
set. We show that the mathematical operators developed for image analysis in
deep learning approaches can be used and extended to consider standard
functional operators in continuum Physics, thus establishing a common framework
for both. The methodology presented demonstrates its ability to discover the
internal constitutive state equation for some problems, including heterogeneous
and nonlinear features, while maintaining its predictive ability for the whole
dataset coverage, at the cost of a single evaluation.
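The abstract describes the PGNNIV construction only in words; the sketch below makes its two key ingredients concrete for a 1-D steady diffusion problem: a hidden layer interpreted as an internal state variable (the flux q), and a fixed convolution kernel playing the role of a continuum differential operator that enforces the balance law. Everything here (the 1-D setting, the layer sizes, names such as `to_internal`, and the synthetic stand-in data) is an illustrative assumption, not the authors' implementation.

```python
# Minimal PGNNIV-style sketch in PyTorch (illustrative assumptions only).
import torch
import torch.nn as nn

N = 64                    # grid points discretizing the 1-D domain [0, 1]
dx = 1.0 / (N - 1)

# Fixed, non-trainable central-difference d/dx written as a convolution:
# the same operator family used in image analysis, reused as a functional
# operator of continuum physics.
ddx = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
ddx.weight.data = torch.tensor([[[-1.0, 0.0, 1.0]]]) / (2.0 * dx)
ddx.weight.requires_grad_(False)

# Internal-variable subnetwork: maps the measurable field u to a hidden
# layer q that the physics constraint will force to behave like the
# (unmeasured) flux field.
to_internal = nn.Sequential(nn.Linear(N, 128), nn.Tanh(), nn.Linear(128, N))

opt = torch.optim.Adam(to_internal.parameters(), lr=1e-3)

# Stand-in observable data: temperature fields u and source terms f.
# Only measurable quantities appear in the training set; q is never given.
u_obs = torch.randn(128, N)
f_obs = torch.randn(128, N)

for step in range(2000):
    q = to_internal(u_obs)                      # internal state variable
    div_q = ddx(q.unsqueeze(1)).squeeze(1)      # d q / d x via convolution
    # Universal balance law dq/dx + f = 0 imposed on the internal layer
    # (the full method combines this with a data term on measurable outputs).
    loss = ((div_q + f_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, pairing q with ddx(u) point by point exposes the learned
# constitutive relation q(du/dx) without it ever being written explicitly.
grad_u = ddx(u_obs.unsqueeze(1)).squeeze(1)
```

The split of roles is the point of the construction: the frozen convolution encodes the universal law that is known a priori, while the trainable layers absorb the unknown constitutive behavior, which can then be read off from the internal layer after training.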
Related papers
- Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
Manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds.
We propose a theoretical framework that overcomes this limitation by leveraging contextual input information.
Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z) - Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model
for Complex Material Responses [12.454290779121383]
We introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal law from data.
This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed.
We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency compared to baseline models.
arXiv Detail & Related papers (2024-01-11T17:37:20Z) - Predicting and explaining nonlinear material response using deep
Physically Guided Neural Networks with Internal Variables [0.0]
We use the concept of Physically Guided Neural Networks with Internal Variables (PGNNIV) to discover constitutive laws.
PGNNIVs make particular use of the physics of the problem to enforce constraints on specific hidden layers.
We demonstrate that PGNNIVs are capable of predicting both internal and external variables under unseen load scenarios.
arXiv Detail & Related papers (2023-08-07T21:20:24Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Learn Like The Pro: Norms from Theory to Size Neural Computation [3.848947060636351]
We investigate how dynamical systems with nonlinearities can inform the design of neural systems that seek to emulate them.
We propose a Learnability metric and relate its associated features to the near-equilibrium behavior of learning dynamics.
It reveals exact sizing for a class of neural networks with multiplicative nodes that mimic continuous- or discrete-time dynamics.
arXiv Detail & Related papers (2021-06-21T20:58:27Z) - Identification of state functions by physically-guided neural networks
with physically-meaningful internal layers [0.0]
We use the concept of physically-constrained neural networks (PCNN) to predict the input-output relation in a physical system.
We show that this approach, besides getting physically-based predictions, accelerates the training process.
arXiv Detail & Related papers (2020-11-17T11:26:37Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)