How neural networks learn to classify chaotic time series
- URL: http://arxiv.org/abs/2306.02300v1
- Date: Sun, 4 Jun 2023 08:53:27 GMT
- Title: How neural networks learn to classify chaotic time series
- Authors: Alessandro Corbetta, Thomas Geert de Jong
- Abstract summary: We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are increasingly employed to model, analyze and control
non-linear dynamical systems ranging from physics to biology. Owing to their
universal approximation capabilities, they regularly outperform
state-of-the-art model-driven methods in terms of accuracy, computational
speed, and/or control capabilities. On the other hand, neural networks are very
often taken as black boxes whose explainability is challenged, among other
things, by their huge number of trainable parameters. In this paper, we tackle the
outstanding issue of analyzing the inner workings of neural networks trained to
classify regular-versus-chaotic time series. This setting, well-studied in
dynamical systems, enables thorough formal analyses. We focus specifically on a
family of networks dubbed Large Kernel Convolutional Neural Networks (LKCNN),
recently introduced by Boullé et al. (2021). These non-recursive networks
have been shown to outperform other established architectures (e.g. residual
networks, shallow neural networks and fully convolutional networks) at this
classification task. Furthermore, they outperform "manual" classification
approaches based on direct reconstruction of the Lyapunov exponent. We find
that LKCNNs use qualitative properties of the input sequence. In particular, we
show that the relation between input periodicity and activation periodicity is
key for the performance of LKCNN models. Low-performing models show, in fact,
periodic activations analogous to those of random, untrained models. This could
provide very general criteria for identifying, a priori, trained models with
poor accuracy.
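To make the classification setting concrete, below is a minimal sketch of a large-kernel 1D convolutional classifier in PyTorch. It only illustrates the architecture family: the kernel length, number of filters, and pooling choice are placeholders, not the exact LKCNN configuration of Boullé et al. (2021).

```python
# Illustrative large-kernel 1D CNN for regular-vs-chaotic classification.
# Hyperparameters are placeholders, not the LKCNN configuration of the paper.
import torch
import torch.nn as nn

class LargeKernelCNN(nn.Module):
    def __init__(self, n_filters=8, kernel_size=100):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size)  # single wide convolution
        self.pool = nn.AdaptiveMaxPool1d(1)               # collapse the time axis
        self.head = nn.Linear(n_filters, 2)               # regular vs. chaotic logits

    def forward(self, x):                          # x: (batch, seq_len)
        h = torch.relu(self.conv(x.unsqueeze(1)))  # (batch, n_filters, L')
        h = self.pool(h).squeeze(-1)               # (batch, n_filters)
        return self.head(h)

model = LargeKernelCNN()
logits = model(torch.randn(4, 500))   # four dummy time series of length 500
print(logits.shape)                   # torch.Size([4, 2])
```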
Related papers
- Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation [6.233189707488025]
In this article, we analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof.
We show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series.
arXiv Detail & Related papers (2024-08-14T12:49:58Z)
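As a rough illustration of the adaptive LIF mechanism discussed in the entry above, the sketch below steps a single adaptive LIF neuron in discrete time. The adaptation rule and all constants are assumptions chosen for readability, not the paper's model.

```python
# Minimal discrete-time sketch of an adaptive LIF neuron: a leaky membrane
# potential coupled to an adaptation variable that suppresses sustained firing.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def adaptive_lif(inputs, alpha=0.95, beta=0.9, a=0.5, b=1.0, threshold=1.0):
    u, w = 0.0, 0.0                      # membrane potential, adaptation current
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        u = alpha * u + (1 - alpha) * (i_t - w)    # leaky integration minus adaptation
        s = float(u >= threshold)                  # spike when threshold is crossed
        u -= s * threshold                         # soft reset after a spike
        w = beta * w + (1 - beta) * a * u + b * s  # adaptation tracks activity and spikes
        spikes[t] = s
    return spikes

print(adaptive_lif(np.full(100, 0.4)).sum())  # adaptation reduces firing over time
```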
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structures.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
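The random-functional-link idea in the entry above can be illustrated with a minimal sketch in which the hidden weights stay fixed at their random initialization and only a linear readout is fitted. This is a generic construction, not the specific RFLNN/BLS design analyzed in the paper.

```python
# Sketch of the random-functional-link idea: hidden weights are drawn at random
# and kept fixed, and only a linear readout is fitted (here by least squares).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))             # toy inputs
y = np.sin(X.sum(axis=1))                     # toy regression target

W = rng.standard_normal((5, 100))             # fixed random hidden weights
b = rng.standard_normal(100)
H = np.tanh(X @ W + b)                        # random nonlinear features
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the readout is learned

print(np.mean((H @ beta - y) ** 2))           # training error of the readout
```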
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Dynamic Analysis of Nonlinear Civil Engineering Structures using Artificial Neural Network with Adaptive Training [2.1202971527014287]
In this study, artificial neural networks are developed with adaptive training algorithms.
The networks can successfully predict the time-history response of the shear frame and the rock structure to real ground motion records.
arXiv Detail & Related papers (2021-11-21T21:14:48Z)
- Structure and Performance of Fully Connected Neural Networks: Emerging Complex Network Properties [0.8484871864277639]
Complex Network (CN) techniques are proposed to analyze the structure and performance of fully connected neural networks.
We build a dataset with 4 thousand models and their respective CN properties.
Our findings suggest that CN properties play a critical role in the performance of fully connected neural networks.
arXiv Detail & Related papers (2021-07-29T14:53:52Z)
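A minimal sketch of the mapping used in studies like the one above: treat each neuron as a node, each weight as an edge, and compute complex-network measures on the resulting graph. The layer sizes, random weights, and chosen measures are illustrative, not the paper's dataset of 4 thousand models.

```python
# Sketch: map a small fully connected network's weight matrices onto a graph
# and read off simple complex-network measures (here, density and node strength).
# The weights are random stand-ins for a trained model's parameters.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
layers = [4, 8, 3]                                    # toy layer sizes
weights = [rng.standard_normal((m, n)) for m, n in zip(layers, layers[1:])]

G = nx.Graph()
offsets = np.cumsum([0] + layers)                     # node index offset per layer
for k, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            G.add_edge(offsets[k] + i, offsets[k + 1] + j, weight=abs(W[i, j]))

strength = dict(G.degree(weight="weight"))            # weighted degree per neuron
print(nx.density(G), max(strength.values()))
```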
- The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks [43.860358308049044]
In this work, we show that these common perceptions can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in deeper networks with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
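A simplified scalar sketch of liquid time-constant dynamics follows: a leaky linear state whose effective time constant is modulated by an input-dependent gate, integrated with a semi-implicit step. The gate and constants are illustrative assumptions, not the trained networks of the paper.

```python
# Simplified scalar sketch of liquid time-constant dynamics: a linear leaky
# state whose effective time constant depends on a nonlinear input gate.
# The gate and constants below are illustrative, not the paper's trained model.
import numpy as np

def ltc_step(x, i_t, dt=0.1, tau=1.0, A=1.0, w=1.0, b=0.0):
    f = 1.0 / (1.0 + np.exp(-(w * i_t + b)))     # input-dependent gate
    # semi-implicit update of dx/dt = -(1/tau + f) * x + f * A
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

x, trace = 0.0, []
for i_t in np.sin(np.linspace(0, 6 * np.pi, 200)):  # toy input signal
    x = ltc_step(x, i_t)
    trace.append(x)
print(min(trace), max(trace))                        # state stays bounded
```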
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Deep Randomized Neural Networks [12.333836441649343]
Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed.
This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks.
arXiv Detail & Related papers (2020-02-27T17:57:58Z)
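One canonical example of a neural system with mostly fixed connections, as surveyed in the entry above, is a reservoir/echo-state setup. The sketch below is an assumption-laden toy: reservoir size, spectral-radius scaling, and the prediction task are chosen for illustration only.

```python
# Sketch of a reservoir-computing flavour of randomized networks: a fixed random
# recurrent reservoir driven by the input, with only a linear readout trained.
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 100, 500
W_in = rng.standard_normal(n_res) * 0.5           # fixed input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

u = np.sin(0.1 * np.arange(T + 1))                # toy input series
x, states = np.zeros(n_res), []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])              # reservoir update (never trained)
    states.append(x.copy())

H = np.array(states)                              # (T, n_res) reservoir states
w_out, *_ = np.linalg.lstsq(H, u[1:], rcond=None) # train readout to predict next input
print(np.mean((H @ w_out - u[1:]) ** 2))
```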
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.