Recurrent Spectral Network (RSN): shaping the basin of attraction of a
discrete map to reach automated classification
- URL: http://arxiv.org/abs/2202.04497v1
- Date: Wed, 9 Feb 2022 14:59:06 GMT
- Title: Recurrent Spectral Network (RSN): shaping the basin of attraction of a
discrete map to reach automated classification
- Authors: Lorenzo Chicchi, Duccio Fanelli, Lorenzo Giambagli, Lorenzo Buffoni,
Timoteo Carletti
- Abstract summary: A novel strategy for automated classification is introduced which exploits a fully trained dynamical system to steer items toward distinct attractors.
Non-linear terms act during a transient and make it possible to disentangle the data supplied as initial condition to the discrete dynamical system.
Our approach to classification, which we term the Recurrent Spectral Network (RSN), is successfully tested against a simple test-bed model, created for illustrative purposes, as well as a standard image-processing dataset.
- Score: 4.724825031148412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel strategy for automated classification is introduced which
exploits a fully trained dynamical system to steer items belonging to
different categories toward distinct asymptotic attractors. The latter are
incorporated into the model by taking advantage of the spectral decomposition
of the operator that rules the linear evolution across the processing network.
Non-linear terms act during a transient and make it possible to disentangle
the data supplied as initial condition to the discrete dynamical system,
shaping the boundaries of the different attractors. The network can be
equipped with several memory kernels, which can be sequentially activated to
handle datasets presented in series. Our approach to classification, which we
here term the Recurrent Spectral Network (RSN), is successfully tested against
a simple test-bed model, created for illustrative purposes, as well as a
standard image-processing dataset.
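For intuition, here is a minimal numerical sketch of the mechanism described above (illustrative only, not the authors' implementation; the state dimension, the eigenvalue choices, and the tanh transient are assumptions made for this example):

```python
import numpy as np

# Sketch of the RSN idea: the linear update matrix A is assembled from a
# chosen spectral decomposition A = Phi @ Lambda @ inv(Phi). Eigendirections
# with eigenvalue 1 survive iteration and act as attractors; eigenvalues
# with modulus < 1 make every other direction decay.
rng = np.random.default_rng(0)
n = 8                                   # state dimension (hypothetical)
Phi = rng.normal(size=(n, n))           # eigenvectors (columns); trainable in the RSN
lam = np.full(n, 0.5)                   # decaying directions
lam[0], lam[1] = 1.0, 1.0               # two surviving directions = two class attractors
A = Phi @ np.diag(lam) @ np.linalg.inv(Phi)

def rsn_step(x, t, t_transient=10):
    """One update of the discrete map; a transient non-linearity
    (a plain tanh here, as a stand-in) disentangles the data early on."""
    x = A @ x
    if t < t_transient:
        x = np.tanh(x)
    return x

x = rng.normal(size=n)                  # an input item supplied as initial condition
for t in range(200):
    x = rsn_step(x, t)
# After the transient, x has collapsed onto the span of the surviving
# eigenvectors; the dominant attractor direction labels the class.
scores = np.linalg.lstsq(Phi[:, :2], x, rcond=None)[0]
print("predicted class:", int(np.argmax(np.abs(scores))))
```

Directions with unit eigenvalue survive repeated application of the map and act as attractors; everything else decays, so the asymptotic state encodes the class.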
Related papers
- Complex Recurrent Spectral Network [1.0499611180329806]
This paper presents a novel approach to advancing artificial intelligence (AI) through the development of the Complex Recurrent Spectral Network ($\mathbb{C}$-RSN).
The $\mathbb{C}$-RSN is designed to address a critical limitation in existing neural network models: their inability to emulate the complex processes of biological neural networks.
arXiv Detail & Related papers (2023-12-12T14:14:40Z)
- Stable Attractors for Neural networks classification via Ordinary Differential Equations (SA-nODE) [0.9786690381850358]
The system is constructed a priori to accommodate a set of pre-assigned stationary stable attractors.
Its inherent ability to perform classification is reflected in the shaped basins of attraction associated with each of the target stable attractors.
Although this method does not reach the performance of state-of-the-art deep learning algorithms, it illustrates that continuous dynamical systems with closed analytical interaction terms can serve as high-performance classifiers.
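As a rough illustration of the idea (a toy gradient flow, not the paper's construction; the targets a1, a2 and the potential are assumptions):

```python
import numpy as np

# Toy continuous dynamics with two pre-assigned stable fixed points a1, a2.
a1, a2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])

def V(x):
    # potential whose minima sit exactly at the target attractors
    return np.sum((x - a1) ** 2) * np.sum((x - a2) ** 2)

def grad_V(x, eps=1e-6):
    # numerical gradient (finite differences) keeps the sketch short
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (V(x + e) - V(x - e)) / (2 * eps)
    return g

x = np.array([0.3, 0.8])          # input item = initial condition
for _ in range(5000):             # forward-Euler integration of dx/dt = -grad V
    x -= 0.01 * grad_V(x)
print("converged to:", x)         # lands on a1 or a2 -> class label
```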
arXiv Detail & Related papers (2023-11-17T08:30:41Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL), a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
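A minimal sketch of the underlying mechanics, with a scalar regression target standing in for the paper's classification likelihood (noise levels and dimensions are assumptions):

```python
import numpy as np

# Kalman filter tracking the weights w_t of a linear predictor under a
# random-walk transition w_t = w_{t-1} + noise.
d = 4
w_mean = np.zeros(d)               # posterior mean over weights
w_cov = np.eye(d)                  # posterior covariance
q, r = 1e-3, 0.1                   # transition and observation noise (assumed)

def kalman_step(x, y):
    """Predict with current weights, then update the belief from (x, y)."""
    global w_mean, w_cov
    w_cov = w_cov + q * np.eye(d)            # predict: random-walk drift
    y_hat = x @ w_mean                       # prediction before seeing y
    s = x @ w_cov @ x + r                    # innovation variance (scalar)
    k = (w_cov @ x) / s                      # Kalman gain
    w_mean = w_mean + k * (y - y_hat)        # correct the mean
    w_cov = w_cov - np.outer(k, x @ w_cov)   # correct the covariance
    return y_hat

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5, 0.0])     # ground truth (drifting, in general)
for t in range(200):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()      # noisy scalar target
    kalman_step(x, y)
print("estimated weights:", np.round(w_mean, 2))
```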
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Stacked Residuals of Dynamic Layers for Time Series Anomaly Detection [0.0]
We present an end-to-end differentiable neural network architecture to perform anomaly detection in multivariate time series.
The architecture is a cascade of dynamical systems designed to separate linearly predictable components of the signal.
The anomaly detector exploits the temporal structure of the prediction residuals to detect both isolated point anomalies and set-point changes.
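A minimal sketch of the residual-based detection idea, with a least-squares AR model standing in for the paper's learned dynamical layers (order, threshold, and the toy signal are assumptions):

```python
import numpy as np

# Fit a simple linear (AR) predictor, then flag points whose one-step
# prediction residual is unusually large.
def ar_residuals(x, p=4):
    """Least-squares AR(p) fit; returns one-step-ahead residuals."""
    X = np.stack([x[i:len(x) - p + i] for i in range(p)], axis=1)
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

rng = np.random.default_rng(2)
t = np.arange(500)
signal = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
signal[300] += 2.0                           # inject an isolated point anomaly

res = ar_residuals(signal)
threshold = 5 * np.std(res)                  # crude set-point for detection
anomalies = np.flatnonzero(np.abs(res) > threshold) + 4  # shift by AR order
print("anomalous indices:", anomalies)
```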
arXiv Detail & Related papers (2022-02-25T01:50:22Z)
- Convolutional Dynamic Alignment Networks for Interpretable Classifications [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA-Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which linearly transform their input with weight vectors that dynamically align with task-relevant patterns.
CoDA-Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
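A rough sketch of a single DAU, with an assumed two-layer alignment network and norm constraint (shapes and the ReLU are illustrative, not the paper's exact parameterization):

```python
import numpy as np

# A Dynamic Alignment Unit: the weight vector applied to the input is
# itself computed from the input, so the overall map stays an
# input-dependent *linear* transform y = w(x)^T x, which decomposes
# exactly into per-feature contributions w(x) * x.
rng = np.random.default_rng(3)
d = 6
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def dau(x, b=1.0):
    """Return the dynamically computed weights and the DAU output."""
    v = W2 @ np.maximum(W1 @ x, 0.0)            # small net proposes a direction
    w = b * v / (np.linalg.norm(v) + 1e-12)     # norm constraint keeps w bounded
    return w, w @ x

x = rng.normal(size=d)
w, y = dau(x)
contributions = w * x                           # linear decomposition of y
assert np.isclose(contributions.sum(), y)
print("output:", y, "per-feature contributions:", np.round(contributions, 2))
```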
arXiv Detail & Related papers (2021-03-31T18:03:53Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Large-scale spatiotemporal photonic reservoir computer for image classification [0.8701566919381222]
We propose a scalable photonic architecture for the implementation of feedforward and recurrent neural networks to perform the classification of handwritten digits.
Our experiment exploits off-the-shelf optical and electronic components and currently achieves a network size of 16,384 nodes.
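For context, a purely numerical sketch of the reservoir-computing principle the photonic hardware implements (sizes and parameters here are illustrative assumptions, not the experimental values):

```python
import numpy as np

# A large fixed random recurrent network expands the input nonlinearly;
# only a linear readout on the collected states is trained.
rng = np.random.default_rng(4)
n_res, n_in = 200, 8
W_in = rng.normal(size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius < 1

def reservoir_states(u_seq):
    """Drive the fixed reservoir with an input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

u_seq = rng.normal(size=(30, n_in))       # toy input stream
S = reservoir_states(u_seq)
# A linear readout (fit by least squares in practice) maps the states to
# class scores; the reservoir weights themselves stay fixed.
print("reservoir state matrix:", S.shape)
```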
arXiv Detail & Related papers (2020-04-06T10:22:31Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence using a broad meta-learning framework in which residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
arXiv Detail & Related papers (2020-02-07T16:39:43Z)
- A Multi-Scale Tensor Network Architecture for Classification and Regression [0.0]
We present an algorithm for supervised learning using tensor networks.
We preprocess the data by coarse-graining it through a sequence of wavelet transformations.
We show how fine-graining through the network may be used to initialize models with access to finer-scale features.
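A minimal sketch of the coarse-graining step, using a Haar transform as a concrete choice (the tensor-network contraction itself is omitted; the toy signal is an assumption):

```python
import numpy as np

# One level of a Haar wavelet transform halves the resolution of the
# input; stacking levels yields the multi-scale representation fed to
# the tensor network.
def haar_level(x):
    """One Haar step: pairwise averages (coarse) and differences (detail)."""
    x = x.reshape(-1, 2)
    coarse = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return coarse, detail

signal = np.arange(16, dtype=float)          # toy input of length 2^4
scales = []
x = signal
while x.size > 1:
    x, d = haar_level(x)
    scales.append(x)                         # keep each coarse-grained view
for s in scales:
    print(len(s), np.round(s, 2))
```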
arXiv Detail & Related papers (2020-01-22T21:26:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.