Leveraging Recurrent Neural Networks for Predicting Motor Movements from Primate Motor Cortex Neural Recordings
- URL: http://arxiv.org/abs/2410.22283v2
- Date: Fri, 01 Nov 2024 15:00:44 GMT
- Title: Leveraging Recurrent Neural Networks for Predicting Motor Movements from Primate Motor Cortex Neural Recordings
- Authors: Yuanxi Wang, Zuowen Wang, Shih-Chii Liu
- Abstract summary: This paper presents an efficient solution for decoding motor movements from neural recordings in non-human primates.
An Autoencoder Gated Recurrent Unit (AEGRU) model was adopted as the model architecture for this task.
- Score: 8.365349007799296
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents an efficient deep learning solution for decoding motor movements from neural recordings in non-human primates. An Autoencoder Gated Recurrent Unit (AEGRU) model was adopted as the model architecture for this task. The autoencoder is used only during the training stage to achieve better generalization. Together with the preprocessing techniques, our model achieved a 0.71 $R^2$ score, surpassing the baseline models in Neurobench and ranking first for $R^2$ in the IEEE BioCAS 2024 Grand Challenge on Neural Decoding. Model pruning is also applied, leading to a 41.4% reduction in multiply-accumulate (MAC) operations with little change in the $R^2$ score compared to the unpruned model.
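To make the idea concrete, here is a minimal PyTorch sketch of an autoencoder-regularized GRU decoder in the spirit of AEGRU: a reconstruction branch is trained alongside the regression head as a regularizer and is dropped entirely at inference. All layer sizes, the loss weighting, and the names (`AEGRUSketch`, `n_channels`, `latent`, etc.) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AEGRUSketch(nn.Module):
    def __init__(self, n_channels=96, hidden=64, latent=32, out_dim=2):
        super().__init__()
        # Shared encoder: maps binned spike counts to a compact latent code.
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, latent),
        )
        # Reconstruction branch, used only at training time (autoencoder regularizer).
        self.recon = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channels),
        )
        # GRU + linear head that regresses the 2-D movement (e.g. finger velocity).
        self.gru = nn.GRU(latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x, return_reconstruction=False):
        # x: (batch, time, n_channels) binned spike counts
        z = self.encoder(x)
        h, _ = self.gru(z)
        y_hat = self.head(h)
        if return_reconstruction:          # training path
            return y_hat, self.recon(z)
        return y_hat                       # inference path: autoencoder branch skipped

model = AEGRUSketch()
x = torch.randn(8, 100, 96)   # dummy batch: 8 segments, 100 time bins, 96 channels
y = torch.randn(8, 100, 2)    # dummy velocity targets
y_hat, x_hat = model(x, return_reconstruction=True)
# Regression loss plus a (hypothetical) reconstruction term weighted by 0.1.
loss = nn.functional.mse_loss(y_hat, y) + 0.1 * nn.functional.mse_loss(x_hat, x)
loss.backward()
```

Because the reconstruction branch is skipped at inference, it adds no MAC operations to deployment. The 41.4% MAC reduction reported in the abstract comes from pruning the trained network; one could experiment with this in the sketch via torch.nn.utils.prune on the GRU and linear weights, though the paper's exact pruning scheme is not reproduced here.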
Related papers
- Online decoding of rat self-paced locomotion speed from EEG using recurrent neural networks [41.99844472131922]
Accurate neural decoding of locomotion holds promise for advancing rehabilitation, prosthetic control, and understanding neural correlates of action.
Here, we aim to decode self-paced locomotion speed non-invasively and continuously using cortex-wide EEG recordings from rats.
arXiv Detail & Related papers (2026-02-20T22:12:11Z) - BiND: A Neural Discriminator-Decoder for Accurate Bimanual Trajectory Prediction in Brain-Computer Interfaces [2.5725730509014353]
BiND (Bimanual Neural Discriminator-Decoder) is a two-stage model that first classifies motion type and then uses specialized GRU-based decoders.
We benchmark BiND against six state-of-the-art models on a publicly available 13-session intracortical dataset from a tetraplegic patient.
arXiv Detail & Related papers (2025-08-19T10:18:41Z) - Realtime-Capable Hybrid Spiking Neural Networks for Neural Decoding of Cortical Activity [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) present a promising solution to restoring and decoding brain activity lost due to injury.
Patients with such neuroprosthetics suffer from permanent skull openings resulting from the devices' bulky wiring.
Most recently, spiking neural networks (SNNs) have been researched as potential candidates for low-power neural decoding.
arXiv Detail & Related papers (2025-06-16T12:08:08Z) - EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture relying on Parametric Leaky Integrate-and-Fire neurons.
We introduce an end-to-end biologically inspired semantic segmentation approach by combining Spiking Neural Networks with event cameras.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
arXiv Detail & Related papers (2024-06-20T10:36:24Z) - Latent Variable Double Gaussian Process Model for Decoding Complex Neural Data [0.0]
Non-parametric models, such as Gaussian Processes (GP), show promising results in the analysis of complex data.
We introduce a novel neural decoder model built upon GP models.
We demonstrate an application of this decoder model in a verbal memory experiment dataset.
arXiv Detail & Related papers (2024-05-08T20:49:34Z) - Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Improving Dual-Encoder Training through Dynamic Indexes for Negative Mining [61.09807522366773]
We introduce an algorithm that approximates the softmax with provable bounds and that dynamically maintains the tree.
In our study on datasets with over twenty million targets, our approach halves the error relative to oracle brute-force negative mining.
arXiv Detail & Related papers (2023-03-27T15:18:32Z) - Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since models must be trained before their performance can be predicted.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models, of statistical models with the roofline model, and of a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks [25.084146613277973]
We show for the first time that we can successfully train generative models which utilize binary neural networks.
This massively reduces the computational cost of the models.
We demonstrate that two state-of-the-art deep generative models, the ResNet VAE and Flow++ models, can be binarized effectively using these techniques.
arXiv Detail & Related papers (2020-10-26T10:43:28Z) - AutoPruning for Deep Neural Network with Dynamic Channel Masking [28.018077874687343]
We propose a learning-based automatic pruning algorithm for deep neural networks.
A two-objective problem that seeks both the weights and the best channels for each layer is first formulated.
An alternative optimization approach is then proposed to derive the optimal channel numbers and weights simultaneously.
arXiv Detail & Related papers (2020-10-22T20:12:46Z) - From Boltzmann Machines to Neural Networks and Back Again [31.613544605376624]
We give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models.
Our results are based on new connections to learning two-layer neural networks under $\ell_\infty$-bounded input.
We then give an algorithm for learning a natural class of supervised RBMs with better runtime than what is possible for its related class of networks without distributional assumptions.
arXiv Detail & Related papers (2020-07-25T00:42:50Z) - REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild [62.36144064259933]
We propose REST, a new method that simultaneously tackles both issues via adversarial training and controlling the Lipschitz constant of the neural network.
We demonstrate that REST produces highly robust and efficient models that substantially outperform the original full-sized models in the presence of noise.
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
arXiv Detail & Related papers (2020-01-29T17:23:16Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)