A Hybrid Neural Autoencoder for Sensory Neuroprostheses and Its
Applications in Bionic Vision
- URL: http://arxiv.org/abs/2205.13623v1
- Date: Thu, 26 May 2022 20:52:00 GMT
- Title: A Hybrid Neural Autoencoder for Sensory Neuroprostheses and Its
Applications in Bionic Vision
- Authors: Jacob Granley, Lucas Relic, Michael Beyeler
- Abstract summary: Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capacities.
In this paper we show how a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
As a proof of concept, we demonstrate the effectiveness of our hybrid neural autoencoder (HNA) on the use case of visual neuroprostheses.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensory neuroprostheses are emerging as a promising technology to restore
lost sensory function or augment human capacities. However, sensations elicited
by current devices often appear artificial and distorted. Although current
models can often predict the neural or perceptual response to an electrical
stimulus, an optimal stimulation strategy solves the inverse problem: what is
the required stimulus to produce a desired response? Here we frame this as an
end-to-end optimization problem, where a deep neural network encoder is trained
to invert a known, fixed forward model that approximates the underlying
biological system. As a proof of concept, we demonstrate the effectiveness of
our hybrid neural autoencoder (HNA) on the use case of visual neuroprostheses.
We found that HNA is able to produce high-fidelity stimuli from the MNIST and
COCO datasets that outperform conventional encoding strategies and surrogate
techniques across all tested conditions. Overall, this is an important step
towards addressing the long-standing challenge of restoring high-quality
vision to people living with incurable blindness and may prove a promising
solution for a variety of neuroprosthetic technologies.
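A minimal sketch of this end-to-end setup, assuming a differentiable, fixed forward model (e.g., a phosphene model mapping electrode stimuli to predicted percepts). The module names, network shapes, and loss below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class StimulusEncoder(nn.Module):
    """Deep encoder: target image -> per-electrode stimulus parameters."""
    def __init__(self, n_electrodes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(n_electrodes),
        )

    def forward(self, image):
        return self.net(image)

def train_step(encoder, forward_model, image, optimizer):
    # End-to-end objective: the percept predicted by the *fixed* forward
    # model should match the target image. Gradients flow through the
    # frozen forward model, but only the encoder's weights are updated.
    stimulus = encoder(image)
    percept = forward_model(stimulus)  # known, fixed approximation of biology
    loss = nn.functional.mse_loss(percept, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the encoder is trained; keeping the forward model fixed is what makes the pair a "hybrid" autoencoder, with the decoder half supplied by the model of the biological system rather than learned.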
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe these empirical results show the importance of our assumptions at the most basic, neuronal level of representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
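AKOrN builds on Kuramoto oscillator dynamics. For reference, a hedged sketch of the textbook Kuramoto update (a point of departure, not the AKOrN neuron itself):

```python
import torch

def kuramoto_step(theta, omega, coupling=1.0, dt=0.1):
    """One Euler step of N coupled Kuramoto phase oscillators:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diffs = theta.unsqueeze(0) - theta.unsqueeze(1)  # diffs[i, j] = theta_j - theta_i
    return theta + dt * (omega + coupling * torch.sin(diffs).mean(dim=1))
```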
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
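As a rough illustration of the spiking substrate such a regression framework runs on, here is a minimal leaky integrate-and-fire (LIF) simulation; the constants, shapes, and readout are assumptions rather than the paper's design:

```python
import torch

def lif_run(currents, tau=0.9, v_th=1.0):
    """Simulate N LIF neurons over a (T, N) input-current sequence."""
    v = torch.zeros(currents.shape[1])
    spikes = []
    for t in range(currents.shape[0]):
        v = tau * v + currents[t]   # leaky integration of input current
        s = (v >= v_th).float()     # emit a spike at threshold
        v = v * (1.0 - s)           # hard reset for neurons that fired
        spikes.append(s)
    # A regression readout could, e.g., average these spike trains over time.
    return torch.stack(spikes)
```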
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Neuro-BERT: Rethinking Masked Autoencoding for Self-supervised Neurological Pretraining [24.641328814546842]
We present Neuro-BERT, a self-supervised pre-training framework for neurological signals based on masked autoencoding in the Fourier domain.
We propose a novel pre-training task dubbed Fourier Inversion Prediction (FIP), which randomly masks out a portion of the input signal and then predicts the missing information.
By evaluating our method on several benchmark datasets, we show that Neuro-BERT improves downstream neurological-related tasks by a large margin.
arXiv Detail & Related papers (2022-04-20T16:48:18Z)
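A hedged sketch of the masking-and-Fourier idea behind FIP; the masking granularity and loss form here are assumptions, not the paper's exact recipe:

```python
import torch

def mask_segment(signal, mask_ratio=0.25):
    """Zero out a random contiguous chunk of a 1-D neural signal."""
    n = signal.shape[-1]
    width = int(n * mask_ratio)
    start = torch.randint(0, n - width + 1, (1,)).item()
    masked = signal.clone()
    masked[..., start:start + width] = 0.0
    return masked

def fourier_loss(predicted, target):
    # Score the reconstruction on magnitude spectra, so the objective
    # lives in the Fourier domain as in Fourier Inversion Prediction.
    pred_mag = torch.fft.rfft(predicted).abs()
    targ_mag = torch.fft.rfft(target).abs()
    return torch.mean((pred_mag - targ_mag) ** 2)
```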
- Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization [9.34612743192798]
Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience.
We apply end-to-end learning with membrane potential-based backpropagation to a spiking convolutional auto-encoder.
We show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons.
arXiv Detail & Related papers (2021-09-22T21:27:40Z)
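A minimal sketch of such a combined objective, with illustrative penalty terms and coefficients (assumptions, not the paper's exact formulation):

```python
import torch

def regularized_ae_loss(recon, target, membrane_v, spikes,
                        lam_v=1e-4, lam_s=1e-4):
    recon_loss = torch.mean((recon - target) ** 2)
    v_penalty = torch.mean(membrane_v ** 2)  # keeps membrane potentials bounded
    s_penalty = torch.mean(spikes ** 2)      # discourages runaway (bursting) firing
    return recon_loss + lam_v * v_penalty + lam_s * s_penalty
```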
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
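One operational reading of ANV is noise injected during training; the layer below is an assumption-laden illustration of that reading, not the paper's formulation:

```python
import torch
import torch.nn as nn

class VariabilityLayer(nn.Module):
    """Adds small Gaussian perturbations to activations while training."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:  # variability during learning only, not inference
            return x + self.sigma * torch.randn_like(x)
        return x
```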
- Graph Convolutional Networks Reveal Neural Connections Encoding Prosthetic Sensation [1.4431534196506413]
Machine learning strategies that optimize stimulation parameters as the subject learns to interpret the artificial input could improve device efficacy.
Recent advances extending deep learning techniques to non-Euclidean graph data provide a novel approach to interpreting neuronal spiking activity.
We apply graph convolutional networks (GCNs) to infer the underlying functional relationship between neurons that are involved in the processing of artificial sensory information.
arXiv Detail & Related papers (2020-08-23T01:43:46Z)
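A minimal Kipf-and-Welling-style GCN layer, as one might apply to per-neuron spiking features with an estimated adjacency matrix; shapes and preprocessing are assumptions:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2.
        a_hat = adj + torch.eye(adj.shape[0])
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        return torch.relu(self.linear(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))
```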
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.