Graph Convolutional Networks Reveal Neural Connections Encoding
Prosthetic Sensation
- URL: http://arxiv.org/abs/2009.03272v1
- Date: Sun, 23 Aug 2020 01:43:46 GMT
- Title: Graph Convolutional Networks Reveal Neural Connections Encoding
Prosthetic Sensation
- Authors: Vivek Subramanian, Joshua Khani
- Abstract summary: Machine learning strategies that optimize stimulation parameters as the subject learns to interpret the artificial input could improve device efficacy.
Recent advances extending deep learning techniques to non-Euclidean graph data provide a novel approach to interpreting neuronal spiking activity.
We apply graph convolutional networks (GCNs) to infer the underlying functional relationship between neurons that are involved in the processing of artificial sensory information.
- Score: 1.4431534196506413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting stimulus features from neuronal ensembles is of great interest to
the development of neuroprosthetics that project sensory information directly
to the brain via electrical stimulation. Machine learning strategies that
optimize stimulation parameters as the subject learns to interpret the
artificial input could improve device efficacy, increase prosthetic
performance, ensure stability of evoked sensations, and improve power
consumption by eliminating extraneous input. Recent advances extending deep
learning techniques to non-Euclidean graph data provide a novel approach to
interpreting neuronal spiking activity. For this study, we apply graph
convolutional networks (GCNs) to infer the underlying functional relationship
between neurons that are involved in the processing of artificial sensory
information. Data were collected from a freely behaving rat using an
intracortical microstimulation (ICMS)-based neuroprosthesis equipped with four
infrared (IR) sensors to localize IR light sources.
We use GCNs to predict the stimulation frequency across four stimulating
channels in the prosthesis, which encode relative distance and directional
information to an IR-emitting reward port. Our GCN model achieves a peak
performance of 73.5% on a modified ordinal regression performance metric in a
seven-class classification problem, where chance is 14.3%. Additionally, the
inferred adjacency matrix provides an adequate representation of the underlying
neural circuitry encoding the artificial sensation.
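The abstract describes a concrete modeling pipeline: binned spiking activity from recorded neurons is passed through graph convolutions, a functional adjacency matrix is inferred, and the network predicts one of seven stimulation-frequency classes. The sketch below is a minimal, hypothetical illustration of that idea in plain PyTorch, not the authors' implementation; the learnable adjacency, layer widths, neuron and bin counts, and the single-channel seven-class readout are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpikingGCN(nn.Module):
    """Toy GCN over recorded neurons with a learnable functional adjacency."""

    def __init__(self, n_neurons: int, n_bins: int, n_classes: int = 7):
        super().__init__()
        # Learnable logits for the (assumed) functional adjacency matrix.
        self.adj_logits = nn.Parameter(torch.zeros(n_neurons, n_neurons))
        self.gc1 = nn.Linear(n_bins, 32)        # graph-conv weights, layer 1
        self.gc2 = nn.Linear(32, 16)            # graph-conv weights, layer 2
        self.readout = nn.Linear(n_neurons * 16, n_classes)

    def adjacency(self) -> torch.Tensor:
        # Symmetrize, add self-loops, and apply the usual D^{-1/2} A D^{-1/2}
        # normalization so message passing stays well scaled.
        a = torch.sigmoid(self.adj_logits)
        a = 0.5 * (a + a.T) + torch.eye(a.size(0), device=a.device)
        d_inv_sqrt = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_neurons, n_bins) binned spike counts per recorded unit.
        a_hat = self.adjacency()
        h = F.relu(a_hat @ self.gc1(x))         # mix features along inferred edges
        h = F.relu(a_hat @ self.gc2(h))
        return self.readout(h.flatten(1))       # logits over 7 frequency classes


# Illustrative shapes only: 30 recorded neurons, 10 spike-count bins per trial.
model = SpikingGCN(n_neurons=30, n_bins=10)
logits = model(torch.randn(8, 30, 10))          # (8 trials, 7 classes)
loss = F.cross_entropy(logits, torch.randint(0, 7, (8,)))
```

In this reading, `torch.sigmoid(model.adj_logits)` could be inspected after training as a rough analogue of the inferred adjacency matrix the abstract refers to; the paper's actual architecture, loss, and any regularization on the adjacency are not reproduced here.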
Related papers
- Hybrid Spiking Neural Networks for Low-Power Intra-Cortical Brain-Machine Interfaces [42.72938925647165]
Intra-cortical brain-machine interfaces (iBMIs) have the potential to dramatically improve the lives of people with paraplegia.
Current iBMIs suffer from scalability and mobility limitations due to bulky hardware and wiring.
We are investigating hybrid spiking neural networks for embedded neural decoding in wireless iBMIs.
arXiv Detail & Related papers (2024-09-06T17:48:44Z)
- Growing Deep Neural Network Considering with Similarity between Neurons [4.32776344138537]
We explore a novel approach of progressively increasing neuron numbers in compact models during training phases.
We propose a method that reduces feature extraction biases and neuronal redundancy by introducing constraints based on neuron similarity distributions.
Results on the CIFAR-10 and CIFAR-100 datasets demonstrate improved accuracy.
arXiv Detail & Related papers (2024-08-23T11:16:37Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
arXiv Detail & Related papers (2024-01-01T13:05:29Z)
- Fast gradient-free activation maximization for neurons in spiking neural networks [5.805438104063613]
We present a framework with an efficient design for such a loop.
We track changes in the optimal stimuli for artificial neurons during training.
This formation of refined optimal stimuli is associated with an increase in classification accuracy.
arXiv Detail & Related papers (2023-12-28T18:30:13Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- A Hybrid Neural Autoencoder for Sensory Neuroprostheses and Its Applications in Bionic Vision [0.0]
Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capacities.
In this paper we show how a deep neural network encoder is trained to invert a known, fixed forward model that approximates the underlying biological system.
As a proof of concept, we demonstrate the effectiveness of our hybrid neural autoencoder (HNA) on the use case of visual neuroprostheses.
arXiv Detail & Related papers (2022-05-26T20:52:00Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)