Physical Constraint Embedded Neural Networks for inference and noise regulation
- URL: http://arxiv.org/abs/2105.09146v1
- Date: Wed, 19 May 2021 14:07:20 GMT
- Title: Physical Constraint Embedded Neural Networks for inference and noise regulation
- Authors: Gregory Barber, Mulugeta A. Haile, Tzikang Chen
- Abstract summary: We present methods for embedding even-odd symmetries and conservation laws in neural networks.
We demonstrate that it can accurately infer symmetries without prior knowledge.
We highlight the noise resilient properties of physical constraint embedded neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks often require large amounts of data to generalize and can be
ill-suited for modeling small and noisy experimental datasets. Standard network
architectures trained on scarce and noisy data will return predictions that
violate the underlying physics. In this paper, we present methods for embedding
even-odd symmetries and conservation laws in neural networks and propose novel
extensions and use cases for physical constraint embedded neural networks. We
design an even-odd decomposition architecture for disentangling a neural
network parameterized function into its even and odd components, and demonstrate
that it can accurately infer symmetries without prior knowledge. We highlight
the noise-resilient properties of physical constraint embedded neural networks
and demonstrate their utility as physics-informed noise regulators. Here we
employ a conservation-of-energy constraint embedded network as a
physics-informed noise regulator for a symbolic regression task. We show that
our approach returns a symbolic representation of the neural network
parameterized function that aligns well with the underlying physics while
outperforming a baseline symbolic regression approach.
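To make the even-odd decomposition concrete, below is a minimal PyTorch sketch of one way such an architecture can be realized. The module name, layer sizes, and the use of the identities g_e(x) = (g(x) + g(-x))/2 and g_o(x) = (g(x) - g(-x))/2 are illustrative assumptions, not the authors' exact model.

```python
# Illustrative sketch of an even-odd decomposition architecture.
# NOTE: a reconstruction from the abstract, not the authors' exact model.
import torch
import torch.nn as nn

class EvenOddDecomposition(nn.Module):
    """Splits a learned function g into exactly even and exactly odd parts,
    using g_e(x) = (g(x) + g(-x)) / 2 and g_o(x) = (g(x) - g(-x)) / 2,
    so the symmetry constraint holds by construction."""

    def __init__(self, in_dim: int = 1, hidden: int = 64, out_dim: int = 1):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor):
        g_pos, g_neg = self.g(x), self.g(-x)
        even = 0.5 * (g_pos + g_neg)   # satisfies even(-x) == even(x)
        odd = 0.5 * (g_pos - g_neg)    # satisfies odd(-x) == -odd(x)
        return even, odd

model = EvenOddDecomposition()
x = torch.linspace(-1.0, 1.0, 101).unsqueeze(1)
even, odd = model(x)
prediction = even + odd  # full reconstruction used for the regression loss
```

After training, comparing the magnitudes of the two components indicates whether the fitted function is even, odd, or mixed, which is one way a symmetry can be inferred without prior knowledge.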
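Similarly, here is a hedged sketch of a conservation-of-energy constraint used as a soft noise regulator. The harmonic-oscillator Hamiltonian and the penalty weight `lam` are assumptions; the abstract does not specify how the constraint is embedded.

```python
# Sketch: fit noisy phase-space data (q, p) while penalizing any drift of the
# predicted total energy, so the network output stays physically consistent.
# The Hamiltonian H = p^2/(2m) + k q^2/2 and the weight `lam` are assumptions.
import torch

def energy(q: torch.Tensor, p: torch.Tensor, m: float = 1.0, k: float = 1.0):
    return p.pow(2) / (2.0 * m) + 0.5 * k * q.pow(2)

def regulated_loss(q_pred, p_pred, q_noisy, p_noisy, lam: float = 1.0):
    data_loss = ((q_pred - q_noisy) ** 2).mean() + ((p_pred - p_noisy) ** 2).mean()
    e = energy(q_pred, p_pred)
    conservation = ((e - e.mean()) ** 2).mean()  # energy should be constant in time
    return data_loss + lam * conservation
```

The denoised network output, rather than the raw noisy data, can then be passed to a symbolic regression routine, which is the use case the abstract describes.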
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Can physical information aid the generalization ability of Neural Networks for hydraulic modeling? [0.0]
The application of Neural Networks to river hydraulics is still fledgling, and the field suffers from data scarcity.
We propose to mitigate this problem by introducing physical information into the training phase.
We show that incorporating such soft physical information can improve predictive capabilities.
arXiv Detail & Related papers (2024-03-13T14:51:16Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints (a generic projection sketch follows this list).
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks drawn from a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models [0.0]
In mechanics, the new and active field of physics-informed neural networks attempts to mitigate this disadvantage by designing deep neural networks on the basis of mechanical knowledge.
We propose a first step towards a physics-informing approach, which explains neural networks trained on mechanical data a posteriori.
Therein, principal component analysis decorrelates the distributed representations in the cell states of RNNs and allows comparison to known and fundamental functions (a small sketch follows this list).
arXiv Detail & Related papers (2021-04-20T18:38:52Z)
- Learning the ground state of a non-stoquastic quantum Hamiltonian in a rugged neural network landscape [0.0]
We investigate a class of universal variational wave-functions based on artificial neural networks.
In particular, we show that in the present setup the neural network expressivity and Monte Carlo sampling are not primary limiting factors.
arXiv Detail & Related papers (2020-11-23T05:25:47Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Understanding and mitigating gradient pathologies in physics-informed neural networks [2.1485350418225244]
This work focuses on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data.
We present a learning rate annealing algorithm that utilizes gradient statistics during model training to balance the interplay between different terms in composite loss functions (a sketch follows this list).
We also propose a novel neural network architecture that is more resilient to such gradient pathologies.
arXiv Detail & Related papers (2020-01-13T21:23:49Z)
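For the hard-linear-equality-constraint entry (KKT-hPINN) above, a generic projection layer shows how such constraints can be guaranteed exactly. The class name and the formula x = z - A^T (A A^T)^{-1} (A z - b) are a standard construction assumed here, not necessarily the paper's exact KKT-derived layer.

```python
# Generic sketch: enforce A @ x == b exactly on a network output z by
# orthogonal projection onto the constraint set. Differentiable, so it can
# sit after any backbone. Illustrative of the KKT-hPINN entry above.
import torch
import torch.nn as nn

class LinearEqualityProjection(nn.Module):
    def __init__(self, A: torch.Tensor, b: torch.Tensor):
        super().__init__()
        # Precompute A^T (A A^T)^{-1}; requires A to have full row rank.
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        self.register_buffer("pinv", A.T @ torch.linalg.inv(A @ A.T))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        residual = z @ self.A.T - self.b      # (batch, m) constraint violation
        return z - residual @ self.pinv.T     # A @ output == b holds exactly
```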
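For the explainable-AI entry, here is a small sketch of the kind of a-posteriori PCA analysis it names: collect RNN cell states over time and decorrelate them so the leading components can be compared with known functions. The LSTM size and the use of torch.pca_lowrank are illustrative choices.

```python
# Sketch: decorrelate LSTM cell states with PCA for a-posteriori inspection.
import torch

cell = torch.nn.LSTMCell(input_size=1, hidden_size=32)
x = torch.linspace(0.0, 1.0, 200).reshape(200, 1, 1)  # (time, batch, feature)
h, c = torch.zeros(1, 32), torch.zeros(1, 32)
states = []
with torch.no_grad():
    for t in range(x.shape[0]):
        h, c = cell(x[t], (h, c))
        states.append(c.squeeze(0))
states = torch.stack(states)                      # (time, hidden) cell states
U, S, V = torch.pca_lowrank(states, q=5)          # PCA via randomized SVD
components = (states - states.mean(dim=0)) @ V    # decorrelated traces in time
# Each column of `components` can now be compared to known elementary functions.
```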
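And for the gradient-pathologies entry, a hedged sketch of a gradient-statistics balancing rule for composite losses; the max-to-mean ratio and the moving-average update are common practice assumed here, not details confirmed by the summary.

```python
# Sketch: adapt the weight on one loss term from gradient statistics so that
# no single term dominates training. `params` is e.g. list(model.parameters()).
import torch

def update_balance_weight(residual_loss, boundary_loss, params, lam, alpha=0.1):
    g_r = torch.autograd.grad(residual_loss, params, retain_graph=True)
    g_b = torch.autograd.grad(boundary_loss, params, retain_graph=True)
    max_r = torch.stack([g.abs().max() for g in g_r]).max()
    mean_b = torch.cat([g.abs().flatten() for g in g_b]).mean()
    lam_hat = (max_r / (mean_b + 1e-8)).detach()
    return (1.0 - alpha) * lam + alpha * lam_hat   # smooth moving-average update

# Usage per step: lam = update_balance_weight(...); then minimize
# residual_loss + lam * boundary_loss with the usual backward pass.
```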
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.