Deep Cross-Subject Mapping of Neural Activity
- URL: http://arxiv.org/abs/2007.06407v3
- Date: Tue, 22 Feb 2022 03:17:34 GMT
- Title: Deep Cross-Subject Mapping of Neural Activity
- Authors: Marko Angjelichinoski, Bijan Pesaran and Vahid Tarokh
- Abstract summary: We show that a neural decoder trained on neural activity signals of one subject can be used to robustly decode the motor intentions of a different subject.
The findings reported in this paper are an important step towards the development of cross-subject brain-computer interfaces.
- Score: 33.25686697879346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective. In this paper, we consider the problem of cross-subject decoding,
where neural activity data collected from the prefrontal cortex of a given
subject (destination) is used to decode motor intentions from the neural
activity of a different subject (source). Approach. We cast the problem of
neural activity mapping in a probabilistic framework where we adopt deep
generative modelling. Our proposed algorithm uses a deep conditional variational
autoencoder to infer the representation of the neural activity of the source
subject into an adequate feature space of the destination subject where neural
decoding takes place. Results. We verify our approach on an experimental data
set in which two macaque monkeys perform memory-guided visual saccades to one
of eight target locations. The results show a peak cross-subject decoding
improvement of $8\%$ over subject-specific decoding. Conclusion. We demonstrate
that a neural decoder trained on neural activity signals of one subject can be
used to robustly decode the motor intentions of a different subject with high
reliability. This is achieved in spite of the non-stationary nature of neural
activity signals and the subject-specific variations of the recording
conditions. Significance. The findings reported in this paper are an important
step towards the development of cross-subject brain-computer interfaces that generalize
well across a population.
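The conditional VAE described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the dimensions, layer sizes, and single-layer encoder/decoder are hypothetical, and only the overall structure (encode to a latent Gaussian conditioned on the target class, reparameterize, decode into the destination subject's feature space) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): neural feature vector,
# latent code, and the 8 saccade target locations used as the condition.
X_DIM, Z_DIM, N_CLASSES, HIDDEN = 32, 4, 8, 16

def init_layer(n_in, n_out):
    """Small random weights and zero bias for a dense layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Encoder q(z | x, c): source-subject activity + condition -> latent statistics.
W_enc, b_enc = init_layer(X_DIM + N_CLASSES, HIDDEN)
W_mu, b_mu = init_layer(HIDDEN, Z_DIM)
W_lv, b_lv = init_layer(HIDDEN, Z_DIM)

# Decoder p(x' | z, c): latent + condition -> destination subject's feature
# space, where a decoder trained on the destination subject would then apply.
W_dec, b_dec = init_layer(Z_DIM + N_CLASSES, HIDDEN)
W_out, b_out = init_layer(HIDDEN, X_DIM)

def one_hot(c):
    v = np.zeros(N_CLASSES)
    v[c] = 1.0
    return v

def encode(x, c):
    h = np.tanh(np.concatenate([x, one_hot(c)]) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv  # mean, log-variance

def reparameterize(mu, logvar):
    # z = mu + sigma * eps: the standard reparameterization trick.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(Z_DIM)

def decode(z, c):
    h = np.tanh(np.concatenate([z, one_hot(c)]) @ W_dec + b_dec)
    return h @ W_out + b_out

def kl_divergence(mu, logvar):
    # Closed-form KL(q(z|x,c) || N(0, I)), one term of the VAE objective.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# One forward pass: map a source-subject feature vector into the destination
# subject's feature space, conditioned on a hypothetical target class.
x_src = rng.standard_normal(X_DIM)
mu, logvar = encode(x_src, c=3)
z = reparameterize(mu, logvar)
x_mapped = decode(z, c=3)
print(x_mapped.shape, kl_divergence(mu, logvar) >= 0)
```

In training, the KL term above would be combined with a reconstruction loss on the destination subject's features; here the pass is shown untrained, purely to make the data flow concrete.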
Related papers
- Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models [2.600709013150986]
Understanding the neural basis of behavior is a fundamental goal in neuroscience.
Our approach, named BeNeDiff, first identifies a fine-grained and disentangled neural subspace.
It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor.
arXiv Detail & Related papers (2024-10-12T18:28:56Z) - Towards a "universal translator" for neural dynamics at single-cell, single-spike resolution [10.49121904052395]
We build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas.
Prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding.
arXiv Detail & Related papers (2024-07-19T21:05:28Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z) - Automated Natural Language Explanation of Deep Visual Neurons with Large
Models [43.178568768100305]
This paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models.
Our framework is designed to be compatible with various model architectures and datasets, enabling automated and scalable neuron interpretation.
arXiv Detail & Related papers (2023-10-16T17:04:51Z) - Constraints on the design of neuromorphic circuits set by the properties
of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.