Parsimonious neural networks learn interpretable physical laws
- URL: http://arxiv.org/abs/2005.11144v3
- Date: Wed, 16 Dec 2020 21:36:40 GMT
- Title: Parsimonious neural networks learn interpretable physical laws
- Authors: Saaketh Desai, Alejandro Strachan
- Abstract summary: We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is playing an increasing role in the physical sciences and
significant progress has been made towards embedding domain knowledge into
models. Less explored is its use to discover interpretable physical laws from
data. We propose parsimonious neural networks (PNNs) that combine neural
networks with evolutionary optimization to find models that balance accuracy
with parsimony. The power and versatility of the approach are demonstrated by
developing models for classical mechanics and for predicting the melting
temperature of materials from fundamental properties. In the first example, the
resulting PNNs are easily interpretable as Newton's second law, expressed as a
non-trivial time integrator that exhibits time-reversibility and conserves
energy, where parsimony is critical to extract the underlying symmetries from
the data. In the second case, the PNNs not only find the celebrated Lindemann
melting law, but also new relationships that outperform it in the Pareto sense
of parsimony vs. accuracy.
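As a concrete illustration of the idea described in the abstract, the snippet below is a minimal, hypothetical sketch (not the authors' code) of coupling a simple evolutionary search with a fitness that trades data fit against parsimony. The candidate update rule, the integer-closeness parsimony measure, and all names and hyperparameters are illustrative assumptions; on the toy free-fall data it tends to recover coefficients near (2, -1, 1), i.e. a Verlet-like integrator of the kind the paper reports.

```python
# Minimal sketch of the PNN idea: evolve candidate models, scoring each by
# data fit plus a parsimony penalty. Everything below is an assumption made
# for illustration, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a particle under constant force, sampled in time (zero initial velocity).
dt, m, f = 0.1, 1.0, -9.8
t = np.arange(200) * dt
x = 0.5 * (f / m) * t**2

# A candidate "model" is a coefficient vector (a, b, c) for the update
#   x[i+1] = a*x[i] + b*x[i-1] + c*(f/m)*dt**2
# Parsimony is approximated by how close the coefficients are to simple integers,
# so the Verlet-like rule (2, -1, 1) is both accurate and simple.
def fitness(genome, lam=0.1):
    a, b, c = genome
    pred = a * x[1:-1] + b * x[:-2] + c * (f / m) * dt**2
    mse = np.mean((pred - x[2:]) ** 2)
    parsimony = np.sum(np.abs(genome - np.round(genome)))
    return mse + lam * parsimony   # lower is better

def evolve(pop_size=64, gens=200, sigma=0.1):
    pop = rng.normal(0.0, 1.0, size=(pop_size, 3))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[: pop_size // 4]]       # selection
        children = np.repeat(parents, 4, axis=0)
        children += rng.normal(0.0, sigma, size=children.shape)  # mutation
        pop = children
    return pop[np.argmin([fitness(g) for g in pop])]

print(evolve())  # tends toward coefficients near (2, -1, 1): a Verlet-like integrator
```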
Related papers
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Extreme sparsification of physics-augmented neural networks for interpretable model discovery in mechanics [0.0]
We propose to train regularized physics-augmented neural network-based models utilizing a smoothed version of $L_0$-regularization.
We show that the method can reliably obtain interpretable and trustworthy models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity.
arXiv Detail & Related papers (2023-10-05T16:28:58Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean [1.0923877073891446]
Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks.
Despite this success, the inner workings of DNNs are often not transparent.
This lack of interpretability has led to increased research on inherently interpretable neural networks.
arXiv Detail & Related papers (2023-01-27T17:06:13Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws obey symmetries, which provide a vital inductive bias for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Interpretable Quantum Advantage in Neural Sequence Learning [2.575030923243061]
We study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements.
We pinpoint quantum contextuality as the source of an unconditional memory separation in the expressivity of the two model classes.
In doing so, we demonstrate that our introduced quantum models are able to outperform state-of-the-art classical models even in practice.
arXiv Detail & Related papers (2022-09-28T18:34:04Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Physics-enhanced Neural Networks in the Small Data Regime [0.0]
We show that by considering the actual energy level as a regularization term during training, the results can be further improved.
Especially in the case where only small amounts of data are available, these improvements can significantly enhance the predictive capability.
arXiv Detail & Related papers (2021-11-19T17:21:14Z)
- A deep learning framework for solution and discovery in solid mechanics [1.4699455652461721]
We present the application of a class of deep learning methods, known as Physics-Informed Neural Networks (PINNs), to learning and discovery in solid mechanics.
We explain how to incorporate the momentum balance and elasticity relations into PINN, and explore in detail the application to linear elasticity.
arXiv Detail & Related papers (2020-02-14T08:24:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.