Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses
- URL: http://arxiv.org/abs/2209.13561v1
- Date: Tue, 27 Sep 2022 17:33:19 GMT
- Title: Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses
- Authors: Jacob Granley, Alexander Riedel, Michael Beyeler
- Abstract summary: Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
- Score: 68.96380145211093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cortical prostheses are devices implanted in the visual cortex that attempt
to restore lost vision by electrically stimulating neurons. Currently, the
vision provided by these devices is limited, and accurately predicting the
visual percepts resulting from stimulation is an open challenge. We propose to
address this challenge by utilizing 'brain-like' convolutional neural networks
(CNNs), which have emerged as promising models of the visual system. To
investigate the feasibility of adapting brain-like CNNs for modeling visual
prostheses, we developed a proof-of-concept model to predict the perceptions
resulting from electrical stimulation. We show that a neurologically-inspired
decoding of CNN activations produces qualitatively accurate phosphenes,
comparable to phosphenes reported by real patients. Overall, this is an
essential first step towards building brain-like models of electrical
stimulation, which may not just improve the quality of vision provided by
cortical prostheses but could also further our understanding of the neural code
of vision.
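The abstract describes decoding electrode stimulation into predicted phosphenes. As a rough illustration of the general idea (not the paper's actual model), the sketch below maps each electrode's cortical position into the visual field with a toy inverse retinotopic map and renders a Gaussian phosphene whose brightness scales with stimulation amplitude. The map parameters, blob model, and electrode coordinates are all hypothetical stand-ins.

```python
import numpy as np

def cortex_to_visual_field(x_mm, y_mm, k=15.0, a=0.5):
    """Toy inverse of the monopole retinotopic map w = k * log(z + a).
    Cortical position (mm) -> visual-field position (deg). Parameter
    values are hypothetical; the paper's actual mapping may differ."""
    z = np.exp(complex(x_mm, y_mm) / k) - a
    return z.real, z.imag

def render_phosphenes(electrodes, size=64, fov=10.0, sigma=0.4):
    """Render each stimulated electrode as a Gaussian blob whose
    brightness scales with amplitude -- a crude stand-in for decoding
    CNN activations into a percept."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    # Pixel grid in degrees of visual angle, centred on fixation.
    xs = (xs / (size - 1) - 0.5) * 2 * fov
    ys = (ys / (size - 1) - 0.5) * 2 * fov
    for (ex, ey, amp) in electrodes:
        vx, vy = cortex_to_visual_field(ex, ey)
        img += amp * np.exp(-((xs - vx) ** 2 + (ys - vy) ** 2)
                            / (2 * sigma ** 2))
    return img / max(img.max(), 1e-9)  # normalise to [0, 1]

# Two hypothetical electrodes: (cortical x mm, cortical y mm, amplitude).
percept = render_phosphenes([(10.0, 2.0, 1.0), (20.0, -3.0, 0.5)])
print(percept.shape)
```

In a brain-like CNN model, the rendering step would instead be driven by the network's stimulated feature-map activations rather than a fixed Gaussian profile.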
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex [12.1427193917406]

We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
arXiv Detail & Related papers (2023-09-26T15:38:26Z)
- BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Adversarially trained neural representations may already be as robust as corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons making up primate visual systems exhibit susceptibility to adversarial perturbations comparable in magnitude to that of existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z)
- Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [3.8738982761490988]
Compared to human vision, computer vision based on convolutional neural networks (CNNs) is more vulnerable to adversarial noise.
This difference is likely attributable to how the eyes sample visual input and how the brain processes retinal samples through its dorsal and ventral visual pathways.
We design recurrent neural networks, including an input sampler that mimics the human retina, a dorsal network that guides where to look next, and a ventral network that represents the retinal samples.
Taking these modules together, the models learn to take multiple glances at an image, attend to a salient part at each glance, and accumulate the representation over time to recognize the image.
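The glance-attend-accumulate loop described above can be sketched in miniature. Everything here is a hypothetical stand-in: the "dorsal" step simply fixates the brightest unvisited region instead of using a learned where-to-look network, and the "ventral" representation is a raw patch average rather than a trained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def foveate(image, cx, cy, size=8):
    """Retina-like sampler: crop a patch at the fixation point
    (peripheral blur omitted for brevity)."""
    h, w = image.shape
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    return image[y0:y0 + size, x0:x0 + size]

def recognise(image, n_glances=3):
    """Accumulate a representation over several glances. Picking the
    brightest unvisited region is a crude stand-in for the learned
    dorsal (where-to-look) network."""
    state = np.zeros(64)                 # running 'ventral' representation
    saliency = image.copy()
    for _ in range(n_glances):
        cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
        patch = foveate(image, cx, cy)
        state += patch.flatten() / n_glances          # accumulate over time
        saliency[max(0, cy - 4):cy + 4,
                 max(0, cx - 4):cx + 4] = 0           # inhibition of return
    return state

img = rng.random((32, 32))
rep = recognise(img)
print(rep.shape)
```

A real model would feed `rep` to a classifier and train all three modules jointly, so that the fixation policy and the accumulated representation are learned end to end.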
arXiv Detail & Related papers (2022-06-15T03:44:42Z)
- A Hybrid Neural Autoencoder for Sensory Neuroprostheses and Its Applications in Bionic Vision [0.0]
Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capacities.
In this paper we show how a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
As a proof of concept, we demonstrate the effectiveness of our hybrid neural autoencoder (HNA) on the use case of visual neuroprostheses.
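The core idea of training an encoder to invert a known, fixed forward model can be shown in a minimal linear setting. This is a sketch under stated assumptions, not the paper's method: a random linear map stands in for the biological forward model, and the "encoder" is a single weight matrix trained by stochastic gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known, fixed forward model: electrode stimulus -> percept. A random
# linear map stands in for the biological system; the paper's forward
# model is richer and nonlinear.
n_pix, n_elec = 8, 16
A = rng.standard_normal((n_pix, n_elec)) * 0.3

def forward(stim):
    return A @ stim

# Train a linear 'encoder' W end-to-end so that forward(W @ target)
# reproduces the target percept, i.e. W learns to invert forward().
W = np.zeros((n_elec, n_pix))
lr = 0.02
for _ in range(3000):
    target = rng.standard_normal(n_pix)
    err = forward(W @ target) - target
    W -= lr * A.T @ np.outer(err, target)  # grad of ||err||^2 / 2 w.r.t. W

test_target = rng.standard_normal(n_pix)
mse = np.mean((forward(W @ test_target) - test_target) ** 2)
print(f"reconstruction MSE: {mse:.4f}")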
arXiv Detail & Related papers (2022-05-26T20:52:00Z)
- Deep Learning-Based Perceptual Stimulus Encoder for Bionic Vision [6.1739856715198]
We propose a PSE that is trained in an end-to-end fashion to predict the electrode activation patterns required to produce a desired visual percept.
We demonstrate the effectiveness of the encoder on MNIST using a psychophysically validated phosphene model tailored to individual retinal implant users.
arXiv Detail & Related papers (2022-03-10T19:42:09Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- NeuroGen: activation optimized image synthesis for discovery neuroscience [9.621977197691747]
We propose a novel computational strategy, which we call NeuroGen, to overcome limitations and develop a powerful tool for human vision neuroscience discovery.
NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation.
By using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli.
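The optimization loop behind activation-targeted synthesis can be illustrated in a toy linear setting. Both components here are hypothetical stand-ins: a random linear map plays the fMRI-trained encoding model, and gradient descent runs directly in pixel space, whereas NeuroGen optimizes the latent code of a deep generative network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear 'encoding model': image -> regional brain responses.
n_pix, n_regions = 64, 4
E = rng.standard_normal((n_regions, n_pix)) / np.sqrt(n_pix)

def predicted_response(img):
    return E @ img

# Synthesize an image predicted to activate region 0 only.
target = np.array([1.0, 0.0, 0.0, 0.0])
img = np.zeros(n_pix)
lr = 0.5
for _ in range(500):
    err = predicted_response(img) - target
    img -= lr * E.T @ err  # gradient of ||E img - target||^2 / 2

print(np.round(predicted_response(img), 3))
```

With a generative network in the loop, the same gradient would be propagated through the generator so that the synthesized image stays on the manifold of natural-looking images.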
arXiv Detail & Related papers (2021-05-15T04:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.