Investigating naturalistic hand movements by behavior mining in
long-term video and neural recordings
- URL: http://arxiv.org/abs/2001.08349v2
- Date: Fri, 19 Jun 2020 22:52:49 GMT
- Title: Investigating naturalistic hand movements by behavior mining in
long-term video and neural recordings
- Authors: Satpreet H. Singh, Steven M. Peterson, Rajesh P. N. Rao, Bingni W.
Brunton
- Abstract summary: We describe an automated approach for analyzing simultaneously recorded long-term, naturalistic electrocorticography (ECoG) and naturalistic behavior video data.
We show results from our approach applied to data collected for 12 human subjects over 7--9 days for each subject.
Our pipeline discovers and annotates over 40,000 instances of naturalistic human upper-limb movement events in the behavioral videos.
- Score: 1.7205106391379024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent technological advances in brain recording and artificial intelligence
are propelling a new paradigm in neuroscience beyond the traditional controlled
experiment. Rather than focusing on cued, repeated trials, naturalistic
neuroscience studies neural processes underlying spontaneous behaviors
performed in unconstrained settings. However, analyzing such unstructured data
lacking a priori experimental design remains a significant challenge,
especially when the data is multi-modal and long-term. Here we describe an
automated approach for analyzing simultaneously recorded long-term,
naturalistic electrocorticography (ECoG) and naturalistic behavior video data.
We take a behavior-first approach to analyzing the long-term recordings. Using
a combination of computer vision, discrete latent-variable modeling, and string
pattern-matching on the behavioral video data, we find and annotate spontaneous
human upper-limb movement events. We show results from our approach applied to
data collected for 12 human subjects over 7--9 days for each subject. Our
pipeline discovers and annotates over 40,000 instances of naturalistic human
upper-limb movement events in the behavioral videos. Analysis of the
simultaneously recorded brain data reveals neural signatures of movement that
corroborate prior findings from traditional controlled experiments. We also
prototype a decoder for a movement initiation detection task to demonstrate the
efficacy of our pipeline as a source of training data for brain-computer
interfacing applications. Our work addresses the unique data analysis
challenges in studying naturalistic human behaviors, and contributes methods
that may generalize to other neural recording modalities beyond ECoG. We
publicly release our curated dataset, providing a resource to study
naturalistic neural and behavioral variability at a scale not previously
available.
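The behavior-first pipeline described in the abstract (computer vision to extract limb trajectories, discrete latent-variable modeling to label frames with movement states, and string pattern-matching to locate movement events) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the simulated wrist speeds, the threshold-based discretizer (standing in for a learned discrete latent-variable model), and the regex pattern are all hypothetical.

```python
import re
import numpy as np

# Hypothetical per-frame wrist speeds, e.g. from pose estimation on video.
rng = np.random.default_rng(0)
speeds = np.concatenate([
    rng.uniform(0.0, 0.5, 30),   # rest
    rng.uniform(2.0, 5.0, 15),   # movement burst
    rng.uniform(0.0, 0.5, 30),   # rest
])

# Step 1: discretize each frame into a coarse behavioral state.
# A simple threshold stands in for a learned discrete latent-variable model.
def discretize(speed, threshold=1.0):
    return "m" if speed > threshold else "r"   # m = moving, r = rest

state_string = "".join(discretize(s) for s in speeds)

# Step 2: string pattern-matching to find movement events:
# a sustained run of rest followed by a sustained run of movement.
pattern = re.compile(r"r{5,}(m{10,})")
events = [(m.start(1), m.end(1)) for m in pattern.finditer(state_string)]
print(events)  # → [(30, 45)] : one event spanning frames 30-44
```

Framing event discovery as pattern matching over a state string is what lets this approach scale to days-long recordings: the expensive per-frame models run once, and candidate events are then found with cheap string operations.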
Related papers
- Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models [2.600709013150986]
Understanding the neural basis of behavior is a fundamental goal in neuroscience.
Our approach, named BeNeDiff, first identifies a fine-grained and disentangled neural subspace.
It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor.
arXiv Detail & Related papers (2024-10-12T18:28:56Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging genetic data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Robust alignment of cross-session recordings of neural population activity by behaviour via unsupervised domain adaptation [1.2617078020344619]
We introduce a model capable of inferring behaviourally relevant latent dynamics from previously unseen data recorded from the same animal.
We show that unsupervised domain adaptation combined with a sequential variational autoencoder, trained on several sessions, can achieve good generalisation to unseen data.
arXiv Detail & Related papers (2022-02-12T22:17:30Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG
Signals [62.997667081978825]
We develop a novel statistical point process model-called driven temporal point processes (DriPP)
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models called CycleGAN to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.