Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses
- URL: http://arxiv.org/abs/2209.13561v1
- Date: Tue, 27 Sep 2022 17:33:19 GMT
- Authors: Jacob Granley, Alexander Riedel, Michael Beyeler
- Abstract summary: Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
- Score: 68.96380145211093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cortical prostheses are devices implanted in the visual cortex that attempt
to restore lost vision by electrically stimulating neurons. Currently, the
vision provided by these devices is limited, and accurately predicting the
visual percepts resulting from stimulation is an open challenge. We propose to
address this challenge by utilizing 'brain-like' convolutional neural networks
(CNNs), which have emerged as promising models of the visual system. To
investigate the feasibility of adapting brain-like CNNs for modeling visual
prostheses, we developed a proof-of-concept model to predict the perceptions
resulting from electrical stimulation. We show that a neurologically inspired
decoding of CNN activations produces qualitatively accurate phosphenes,
comparable to phosphenes reported by real patients. Overall, this is an
essential first step towards building brain-like models of electrical
stimulation, which may not just improve the quality of vision provided by
cortical prostheses but could also further our understanding of the neural code
of vision.
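The core idea of a neurologically inspired decoding of cortical stimulation can be illustrated with a minimal sketch: map each stimulated electrode from its cortical location back into the visual field (via an inverse cortical-magnification model) and render it as a Gaussian phosphene. This is a toy illustration, not the paper's actual model; the magnification parameters, phosphene size, and function names are illustrative assumptions.

```python
import numpy as np

def cortex_to_visual_field(x_mm, k=15.0, a=0.7):
    """Inverse of a monopole cortical-magnification model: eccentricity
    (deg) for a cortical distance x (mm) from the foveal pole.
    Parameter values are illustrative, not fitted to data."""
    return a * (np.exp(x_mm / k) - 1.0)

def render_phosphenes(electrodes, amps, size=64, fov=10.0, sigma=0.6):
    """Render Gaussian phosphenes for a set of stimulated electrodes.
    electrodes: list of (cortical_distance_mm, polar_angle_rad);
    amps: stimulation amplitudes (arbitrary units)."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    # pixel grid in degrees of visual angle, centred on fixation
    xs = (xs - size / 2) * (2 * fov / size)
    ys = (ys - size / 2) * (2 * fov / size)
    for (x_mm, theta), amp in zip(electrodes, amps):
        ecc = cortex_to_visual_field(x_mm)
        cx, cy = ecc * np.cos(theta), ecc * np.sin(theta)
        img += amp * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return img

# two hypothetical electrodes at different cortical eccentricities
img = render_phosphenes([(5.0, 0.0), (10.0, np.pi / 2)], amps=[1.0, 0.5])
```

A real model would decode phosphenes from CNN feature activations rather than render them analytically; this sketch only captures the retinotopic-mapping step.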
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex [12.1427193917406]
We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
arXiv Detail & Related papers (2023-09-26T15:38:26Z)
- Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A Minimalist Approach [8.75682288556859]
In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images.
Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models.
arXiv Detail & Related papers (2023-05-03T01:16:45Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- BI-AVAN: Brain-Inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition between attended and neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Adversarially trained neural representations may already be as robust as corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z)
- A Hybrid Neural Autoencoder for Sensory Neuroprostheses and Its Applications in Bionic Vision [0.0]
Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capacities.
In this paper we show how a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
As a proof of concept, we demonstrate the effectiveness of our hybrid neural autoencoder (HNA) on the use case of visual neuroprostheses.
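The hybrid-autoencoder idea of inverting a known, fixed forward model can be reduced to a toy linear case for intuition. Here the "forward model" is a fixed random matrix mapping electrode activations to a predicted percept, and the "encoder" is fit so that encoding followed by the frozen forward model approximates the identity. All sizes and names are illustrative assumptions; the paper's encoder is a deep network trained by backpropagation through the frozen forward model, not a least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_elec = 16, 8   # toy sizes: 16-pixel "percepts", 8 electrodes

# Fixed, known forward model: electrode activations -> predicted percept.
F = rng.normal(size=(n_pix, n_elec))

# Learn encoder E so that F @ (E @ target) ~ target. In the linear case
# this is ordinary least squares; a deep encoder would instead be trained
# by gradient descent through the frozen forward model F.
E = np.linalg.lstsq(F, np.eye(n_pix), rcond=None)[0]  # shape (n_elec, n_pix)

target = rng.normal(size=n_pix)
stim = E @ target        # encoded stimulation pattern
percept = F @ stim       # best reconstruction achievable through F
```

Because there are fewer electrodes than pixels, the reconstruction is the projection of the target onto what the forward model can express, which mirrors the basic constraint any neuroprosthesis encoder faces.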
arXiv Detail & Related papers (2022-05-26T20:52:00Z)
- Deep Learning-Based Perceptual Stimulus Encoder for Bionic Vision [6.1739856715198]
We propose a PSE that is trained in an end-to-end fashion to predict the electrode activation patterns required to produce a desired visual percept.
We demonstrate the effectiveness of the encoder on MNIST using a psychophysically validated phosphene model tailored to individual retinal implant users.
arXiv Detail & Related papers (2022-03-10T19:42:09Z)
- NeuroGen: activation optimized image synthesis for discovery neuroscience [9.621977197691747]
We propose a novel computational strategy, which we call NeuroGen, to overcome limitations and develop a powerful tool for human vision neuroscience discovery.
NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation.
By using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli.
arXiv Detail & Related papers (2021-05-15T04:36:39Z)
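NeuroGen's core mechanism, synthesizing an image that drives a trained encoding model toward a target response, can be sketched in miniature with gradient ascent on the image pixels. The "encoding model" below is a stand-in (a fixed linear readout through a tanh), not an fMRI-trained network, and there is no generative-network prior; every value here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 32

# Stand-in encoding model: predicted regional response to an image
# is tanh of a fixed linear readout w.
w = rng.normal(size=n_pix)

# NeuroGen-style synthesis, reduced to its core: adjust the image by
# gradient ascent so the predicted response approaches a target value.
img = np.zeros(n_pix)
target, lr = 0.9, 0.01
for _ in range(500):
    r = np.tanh(w @ img)
    # gradient of -(target - r)^2 / 2 with respect to the image
    grad = (target - r) * (1 - r ** 2) * w
    img += lr * grad
```

The full method replaces the pixel parameterization with a deep generative network's latent space, which keeps the synthesized images natural-looking while the same response-matching objective is optimized.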
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.