Deep Learning for real-time neural decoding of grasp
- URL: http://arxiv.org/abs/2311.01061v1
- Date: Thu, 2 Nov 2023 08:26:29 GMT
- Title: Deep Learning for real-time neural decoding of grasp
- Authors: Paolo Viviani and Ilaria Gesmundo and Elios Ghinato and Andres
Agudelo-Toro and Chiara Vercellino and Giacomo Vitali and Letizia Bergamasco
and Alberto Scionti and Marco Ghislieri and Valentina Agostini and Olivier
Terzo and Hansjörg Scherberger
- Abstract summary: We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural decoding involves correlating signals acquired from the brain to
variables in the physical world like limb movement or robot control in Brain
Machine Interfaces. In this context, this work starts from a specific
pre-existing dataset of neural recordings from monkey motor cortex and presents
a Deep Learning-based approach to the decoding of neural signals for grasp type
classification. Specifically, we propose here an approach that exploits LSTM
networks to classify time series containing neural data (i.e., spike trains)
into classes representing the object being grasped. The main goal of the
presented approach is to improve over state-of-the-art decoding accuracy
without relying on any prior neuroscience knowledge, and leveraging only the
capability of deep learning models to extract correlations from data. The paper
presents the results achieved for the considered dataset and compares them with
previous works on the same dataset, showing a significant improvement in
classification accuracy, even when simulated real-time decoding is considered.
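The pipeline the abstract describes (bin spike trains into a time series, run an LSTM over it, classify the final state into a grasp type) can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the 10 ms bin width, layer sizes, number of classes, and random weights are all assumptions for demonstration; in practice the LSTM would be trained on the labelled recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_spike_trains(spike_times_ms, n_units, t_end_ms, bin_ms=10.0):
    """Bin per-unit spike times (in ms) into a (n_bins, n_units) count matrix."""
    n_bins = int(np.ceil(t_end_ms / bin_ms))
    counts = np.zeros((n_bins, n_units))
    for u, times in enumerate(spike_times_ms):
        idx = np.minimum((np.asarray(times) / bin_ms).astype(int), n_bins - 1)
        np.add.at(counts[:, u], idx, 1)  # unbuffered add handles repeated bins
    return counts

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_classify(x, Wx, Wh, b, Wout, bout):
    """Run a single-layer LSTM over x (T, d_in), classify the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b            # gate pre-activations, shape (4H,)
        i, f, g, o = np.split(z, 4)           # input, forget, cell, output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    logits = h @ Wout + bout
    probs = np.exp(logits - logits.max())     # stable softmax over grasp classes
    return probs / probs.sum()

# Toy trial: 8 recorded units, 200 ms window, 6 hypothetical grasp classes.
n_units, hidden, n_classes = 8, 16, 6
spikes = [np.sort(rng.uniform(0, 200, rng.integers(5, 30))) for _ in range(n_units)]
x = bin_spike_trains(spikes, n_units, t_end_ms=200)   # shape (20, 8)

Wx = rng.normal(0, 0.1, (n_units, 4 * hidden))
Wh = rng.normal(0, 0.1, (hidden, 4 * hidden))
b = np.zeros(4 * hidden)
Wout = rng.normal(0, 0.1, (hidden, n_classes))
bout = np.zeros(n_classes)

probs = lstm_classify(x, Wx, Wh, b, Wout, bout)
pred = int(np.argmax(probs))
```

Because the LSTM consumes the binned counts one step at a time, the same forward pass supports the simulated real-time setting: the class estimate can be read out after any prefix of the trial rather than only at its end.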
Related papers
- The Brain's Bitter Lesson: Scaling Speech Decoding With Self-Supervised Learning [3.649801602551928]
We develop a set of neuroscience-inspired self-supervised objectives, together with a neural architecture, for representation learning from heterogeneous recordings.
Results show that representations learned with these objectives scale with data, generalise across subjects, datasets, and tasks, and surpass comparable self-supervised approaches.
arXiv Detail & Related papers (2024-06-06T17:59:09Z) - Bayesian Time-Series Classifier for Decoding Simple Visual Stimuli from
Intracranial Neural Activity [0.0]
We propose a straightforward Bayesian time series classifier (BTsC) model that tackles challenges whilst maintaining a high level of interpretability.
We demonstrate the classification capabilities of this approach by utilizing neural data to decode colors in a visual task.
The proposed solution can be applied to neural data recorded in various tasks, where there is a need for interpretable results.
arXiv Detail & Related papers (2023-07-28T17:04:06Z) - Predictive Coding: Towards a Future of Deep Learning beyond
Backpropagation? [41.58529335439799]
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations.
We show the substantially greater flexibility of predictive coding networks against equivalent deep neural networks.
arXiv Detail & Related papers (2022-02-18T22:57:03Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Deep Cross-Subject Mapping of Neural Activity [33.25686697879346]
We show that a neural decoder trained on neural activity signals of one subject can be used to robustly decode the motor intentions of a different subject.
The findings reported in this paper are an important step towards the development of cross-subject brain-computer interfaces.
arXiv Detail & Related papers (2020-07-13T14:35:02Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
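The hidden-state separation described in this summary can be sketched as a set of modules ticking at different rates. The power-of-two clock schedule and tanh update below are assumptions chosen to illustrate multi-scale dynamics, not the paper's exact algorithm; incremental training would append a new (slower) module to `modules` while keeping earlier ones fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

def multiscale_rnn(x, modules):
    """RNN whose hidden state is split into modules; module k updates only
    every 2**k steps, so later modules capture progressively slower dynamics."""
    hs = [np.zeros(Wh.shape[0]) for Wh, _ in modules]  # per-module hidden states
    for t in range(x.shape[0]):
        full = np.concatenate(hs)                      # each module reads the whole state
        for k, (Wh, Wx) in enumerate(modules):
            if t % (2 ** k) == 0:                      # slow clock for module k
                hs[k] = np.tanh(full @ Wh.T + x[t] @ Wx.T)
    return np.concatenate(hs)

# Three modules of 8 units each over a 16-step input sequence.
d_in, sizes = 4, [8, 8, 8]
total = sum(sizes)
modules = [(rng.normal(0, 0.1, (m, total)),   # recurrent weights: full state -> module
            rng.normal(0, 0.1, (m, d_in)))    # input weights for this module
           for m in sizes]
x = rng.normal(size=(16, d_in))
state = multiscale_rnn(x, modules)
```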
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.