Neuronal architecture extracts statistical temporal patterns
- URL: http://arxiv.org/abs/2301.10203v1
- Date: Tue, 24 Jan 2023 18:21:33 GMT
- Title: Neuronal architecture extracts statistical temporal patterns
- Authors: Sandra Nestler, Moritz Helias and Matthieu Gilson
- Abstract summary: We show how higher-order temporal (co-)fluctuations can be employed to represent and process information.
A simple biologically inspired feedforward neuronal model is able to extract information from up to the third order cumulant to perform time series classification.
- Score: 1.9662978733004601
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuronal systems need to process temporal signals. We here show how
higher-order temporal (co-)fluctuations can be employed to represent and
process information. Concretely, we demonstrate that a simple biologically
inspired feedforward neuronal model is able to extract information from up to
the third order cumulant to perform time series classification. This model
relies on a weighted linear summation of synaptic inputs followed by a
nonlinear gain function. Training both the synaptic weights and the nonlinear
gain function exposes how the nonlinearity transfers higher-order correlations
to the mean, which in turn enables the synergistic use of information encoded
in multiple cumulants to maximize classification accuracy. The approach is
demonstrated on both synthetic and real-world datasets of multivariate time
series. Moreover, we show that the biologically
inspired architecture makes better use of the number of trainable parameters as
compared to a classical machine-learning scheme. Our findings emphasize the
benefit of biological neuronal architectures, paired with dedicated learning
algorithms, for the processing of information embedded in higher-order
statistical cumulants of temporal (co-)fluctuations.
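The mechanism described above lends itself to a compact illustration: a weighted linear summation of the input channels followed by a pointwise nonlinear gain, whose time-averaged output serves as the decision variable. The sketch below is a minimal illustration of that idea, not the authors' implementation; the tanh gain, the fixed weights, and the toy input ensembles are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def readout(x, w, gain=np.tanh):
    """Minimal sketch of the feedforward readout described in the abstract.

    x    : (n_channels, n_timesteps) multivariate input time series
    w    : (n_channels,) synaptic weights (trainable in the paper, fixed here)
    gain : pointwise nonlinear gain function (also trainable in the paper)

    Returns the time-averaged nonlinear output. Because the gain is nonlinear,
    higher-order cumulants of the summed input (variance, skewness, ...) shift
    this mean output, which is what makes them usable for classification.
    """
    z = w @ x               # weighted linear summation of synaptic inputs
    return gain(z).mean()   # temporal average as the decision variable

n_ch, n_t = 5, 50_000
w = np.full(n_ch, 1.0 / np.sqrt(n_ch))

# Two input ensembles with identical mean and variance per channel but a
# different third-order cumulant (skewness).
x_skewed = rng.gamma(shape=1.0, scale=1.0, size=(n_ch, n_t)) - 1.0  # skewed, zero mean
x_gauss  = rng.normal(size=(n_ch, n_t))                             # symmetric, zero mean

print("nonlinear readout, skewed  :", readout(x_skewed, w))  # mean shifted by the 3rd cumulant
print("nonlinear readout, gaussian:", readout(x_gauss, w))   # mean close to zero
print("linear readout             :", (w @ x_skewed).mean(), (w @ x_gauss).mean())  # both ~0
```

A purely linear readout leaves the two ensembles indistinguishable in the mean; the nonlinearity is what maps the third-order statistics into a usable mean shift, which is the effect the paper exploits by training both the weights and the gain.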
Related papers
- Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z) - Neural Koopman prior for data assimilation [7.875955593012905]
We use a neural network architecture to embed dynamical systems in latent spaces.
We introduce methods that enable training such a model for long-term continuous reconstruction.
The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques.
arXiv Detail & Related papers (2023-09-11T09:04:36Z) - On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are attracting increasing interest for low-latency, low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Dynamic Adaptive Spatio-temporal Graph Convolution for fMRI Modelling [0.0]
We propose a dynamic adaptive spatio-temporal graph convolution (DASTGCN) model to overcome the shortcomings of pre-defined static correlation-based graph structures.
The proposed approach allows end-to-end inference of dynamic connections between brain regions via a layer-wise graph-structure learning module.
We evaluate our pipeline on UK Biobank resting-state functional scans for age and gender classification tasks.
arXiv Detail & Related papers (2021-09-26T07:19:47Z) - Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel, incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Efficient Inference of Flexible Interaction in Spiking-neuron Networks [41.83710212492543]
We use the nonlinear Hawkes process to model excitatory or inhibitory interactions among neurons (a generic sketch of such an intensity is given after this list).
We show our algorithm can estimate the temporal dynamics of interaction and reveal the interpretable functional connectivity underlying neural spike trains.
arXiv Detail & Related papers (2020-06-23T09:10:30Z)
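As a side note on the last entry above, the conditional intensity of a nonlinear Hawkes process can be sketched in a few lines: a baseline plus weighted, kernel-filtered contributions from past spikes of other neurons, passed through a nonlinearity so that negative (inhibitory) couplings cannot drive the rate below zero. The snippet below is a generic illustration under assumed choices (exponential kernel, softplus nonlinearity), not the inference algorithm of the cited paper.

```python
import numpy as np

def hawkes_intensity(t, spike_times, weights, baseline, tau=0.05):
    """Conditional intensity lambda_i(t) of a nonlinear multivariate Hawkes process.

    spike_times : list over presynaptic neurons j of arrays of past spike times
    weights     : (n_neurons,) couplings onto the target neuron; positive values
                  are excitatory, negative values inhibitory
    baseline    : background drive of the target neuron
    tau         : decay time of the (assumed) exponential interaction kernel

    lambda_i(t) = softplus( baseline + sum_j w_j * sum_{s in spikes_j, s < t} exp(-(t - s) / tau) )
    """
    drive = baseline
    for w_j, times_j in zip(weights, spike_times):
        past = times_j[times_j < t]                       # only spikes before t contribute
        drive += w_j * np.exp(-(t - past) / tau).sum()    # kernel-filtered spike history
    return np.log1p(np.exp(drive))                        # softplus keeps the rate nonnegative

# Toy usage: one excitatory and one inhibitory presynaptic neuron.
spikes = [np.array([0.10, 0.12, 0.15]), np.array([0.14])]
print(hawkes_intensity(0.16, spikes, weights=np.array([1.5, -2.0]), baseline=0.2))
```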
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.