SyReNets: Symbolic Residual Neural Networks
- URL: http://arxiv.org/abs/2105.14396v1
- Date: Sun, 30 May 2021 00:30:27 GMT
- Title: SyReNets: Symbolic Residual Neural Networks
- Authors: Carlos Magno C. O. Valle, Sami Haddadin
- Abstract summary: We propose SyReNets, an approach that leverages neural networks for learning symbolic relations to accurately describe dynamic physical systems from data.
We do this by only observing random samples of position, velocity, and acceleration as input and torque as output.
The approach is evaluated using a simulated controlled double pendulum and compared with neural networks, genetic programming, and traditional system identification.
- Score: 9.713727879151012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite successful seminal works on passive systems in the literature,
learning free-form physical laws for controlled dynamical systems given
experimental data is still an open problem. For decades, symbolic mathematical
equations and system identification were the gold standard. Unfortunately, a
set of assumptions about the properties of the underlying system is required,
which makes the model very rigid and unable to adapt to unforeseen changes in
the physical system. Neural networks, on the other hand, are known to be universal
function approximators but are prone to overfitting, limited accuracy, and bias,
which makes them unreliable candidates for such tasks on their own. In this
paper, we propose SyReNets, an approach that leverages neural networks for
learning symbolic relations to accurately describe dynamic physical systems
from data. It explores a sequence of symbolic layers that build, in a residual
manner, mathematical relations that describe a given desired output from input
variables. We apply it to learn the symbolic equation that describes the
Lagrangian of a given physical system. We do this by only observing random
samples of position, velocity, and acceleration as input and torque as output.
The Lagrangian thus serves as a latent representation from which torque is
derived via the Euler-Lagrange equations. The approach is evaluated using a
simulated controlled double pendulum and compared with neural networks, genetic
programming, and traditional system identification. The results demonstrate
that, compared to neural networks and genetic programming, SyReNets converges
to representations that are more accurate and precise throughout the state
space. Although it converges more slowly than traditional system identification,
the approach, like neural networks, remains flexible enough to adapt to
unforeseen changes in the physical system's structure.
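To make the latent-Lagrangian step concrete, here is a minimal sketch (an illustration, not the authors' code) of how torque follows from a differentiable Lagrangian via the Euler-Lagrange equations, tau = d/dt (dL/dqdot) - dL/dq, using automatic differentiation in JAX. The `lagrangian` function below is a hypothetical stand-in for the symbolic expression SyReNets would learn.

```python
import jax
import jax.numpy as jnp

def lagrangian(q, qdot):
    # Hypothetical placeholder: kinetic minus potential energy.
    # In SyReNets, this role is played by the learned symbolic expression.
    return 0.5 * jnp.dot(qdot, qdot) - jnp.sum(jnp.cos(q))

def torque(q, qdot, qddot):
    # Euler-Lagrange: tau = d/dt(dL/dqdot) - dL/dq, with the time
    # derivative expanded by the chain rule over q and qdot.
    dL_dq = jax.grad(lagrangian, argnums=0)(q, qdot)
    d2L_dqdot2 = jax.hessian(lagrangian, argnums=1)(q, qdot)
    d2L_dq_dqdot = jax.jacobian(jax.grad(lagrangian, argnums=1), argnums=0)(q, qdot)
    return d2L_dqdot2 @ qddot + d2L_dq_dqdot @ qdot - dL_dq

q = jnp.array([0.1, 0.2])        # joint positions
qdot = jnp.array([0.0, 0.3])     # joint velocities
qddot = jnp.array([0.5, -0.1])   # joint accelerations
print(torque(q, qdot, qddot))    # torques implied by this Lagrangian
```

Training then amounts to fitting the Lagrangian so that the torques it implies match the observed ones; the Lagrangian itself is never directly supervised.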
Related papers
- Learning Governing Equations of Unobserved States in Dynamical Systems [0.0]
We employ a hybrid neural ODE structure to learn governing equations of partially-observed dynamical systems.
We demonstrate that the method is capable of successfully learning the true underlying governing equations of unobserved states within these systems.
arXiv Detail & Related papers (2024-04-29T10:28:14Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- AI-Lorenz: A physics-data-driven framework for black-box and gray-box identification of chaotic systems with symbolic regression [2.07180164747172]
We develop a framework that learns mathematical expressions modeling complex dynamical behaviors.
We train a small neural network to learn the dynamics of a system, its rate of change in time, and missing model terms.
This, in turn, enables us to predict the future evolution of the dynamical behavior.
arXiv Detail & Related papers (2023-12-21T18:58:41Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
inputs to the model are routed through a sequence of functions in a way that is end-to-end learned.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Artificial neural network as a universal model of nonlinear dynamical systems [0.0]
The map is built as an artificial neural network whose weights encode a modeled system.
We consider the Lorenz system, the Rössler system, and the Hindmarsh-Rose neuron.
High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
arXiv Detail & Related papers (2021-03-06T16:02:41Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD: DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- System Identification Through Lipschitz Regularized Deep Neural Networks [0.4297070083645048]
We use neural networks to learn governing equations from data.
We reconstruct the right-hand side of a system of ODEs $\dot{x}(t) = f(t, x(t))$ directly from observed uniformly time-sampled data.
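As a rough illustration of that idea (an assumption on my part, not taken from the paper), one can estimate the derivative by finite differences on the uniform time grid and fit a small network to it, with a soft Jacobian penalty standing in for the Lipschitz regularization:

```python
import jax
import jax.numpy as jnp

def mlp(params, t, x):
    # Small network f_theta(t, x) approximating the ODE right-hand side.
    h = jnp.tanh(params["W1"] @ jnp.append(x, t) + params["b1"])
    return params["W2"] @ h + params["b2"]

def loss(params, ts, xs, lam=1e-3):
    dt = ts[1] - ts[0]
    xdot = (xs[1:] - xs[:-1]) / dt          # forward-difference estimate of xdot
    preds = jax.vmap(lambda t, x: mlp(params, t, x))(ts[:-1], xs[:-1])
    fit = jnp.mean((preds - xdot) ** 2)
    # Soft surrogate for Lipschitz control: penalize the Jacobian of f in x
    # (the paper's exact regularizer may differ).
    jac = jax.vmap(lambda t, x: jax.jacobian(mlp, argnums=2)(params, t, x))(ts[:-1], xs[:-1])
    return fit + lam * jnp.mean(jnp.sum(jac ** 2, axis=(1, 2)))

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
d, width = 2, 32
params = {"W1": 0.1 * jax.random.normal(key1, (width, d + 1)), "b1": jnp.zeros(width),
          "W2": 0.1 * jax.random.normal(key2, (d, width)), "b2": jnp.zeros(d)}
ts = jnp.linspace(0.0, 1.0, 50)                 # uniform time samples
xs = jnp.stack([jnp.cos(ts), jnp.sin(ts)], 1)   # toy observed trajectory
grads = jax.grad(loss)(params, ts, xs)          # plug into any gradient optimizer
```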
arXiv Detail & Related papers (2020-09-07T17:52:51Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
arXiv Detail & Related papers (2020-01-07T13:41:27Z)