Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts
- URL: http://arxiv.org/abs/2304.09050v1
- Date: Tue, 18 Apr 2023 15:15:00 GMT
- Title: Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts
- Authors: Stephen M. Gordon, Jonathan R. McDaniel, Kevin W. King, Vernon J. Lawhern, Jonathan Touryan
- Abstract summary: We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models.
We derive estimates of the underlying latent state and associated patterns of neural activity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There exist very few ways to isolate cognitive processes, historically
defined via highly controlled laboratory studies, in more ecologically valid
contexts. Specifically, it remains unclear to what extent patterns of neural
activity observed under such constraints actually manifest outside the
laboratory in a manner that can be used to make an accurate inference about the
latent state, associated cognitive process, or proximal behavior of the
individual. Improving our understanding of when and how specific patterns of
neural activity manifest in ecologically valid scenarios would provide
validation for laboratory-based approaches that study similar neural phenomena
in isolation and meaningful insight into the latent states that occur during
complex tasks. We argue that domain generalization methods from the
brain-computer interface community have the potential to address this
challenge. We previously used such an approach to decode phasic neural
responses associated with visual target discrimination. Here, we extend that
work to more tonic phenomena such as internal latent states. We use data from
two highly controlled laboratory paradigms to train two separate
domain-generalized models. We apply the trained models to an ecologically valid
paradigm in which participants performed multiple, concurrent driving-related
tasks. Using the pretrained models, we derive estimates of the underlying
latent state and associated patterns of neural activity. Importantly, as the
patterns of neural activity change along the axis defined by the original
training data, we find changes in behavior and task performance consistent with
the observations from the original, laboratory paradigms. We argue that these
results lend ecological validity to those experimental designs and provide a
methodology for understanding the relationship between observed neural activity
and behavior during complex tasks.
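The pipeline the abstract describes (train domain-generalized decoders on controlled laboratory data, then apply them to an unseen, ecologically valid recording) can be sketched minimally. Everything below is an illustrative assumption, not the authors' method: simulated "neural" features, a binary latent state, per-domain z-scoring as a stand-in for the domain-generalization step, and plain logistic regression in place of the actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "domain" is one laboratory subject/session.
# A binary latent state shifts the mean of simulated neural features;
# each domain adds its own offset and scale (subject-specific variation).
def make_domain(n=200, shift=0.0, scale=1.0):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(0.0, 1.0, size=(n, 8)) * scale + shift
    x[:, 0] += 1.5 * y          # feature 0 carries the latent-state signal
    return x, y

train_domains = [make_domain(shift=s, scale=sc)
                 for s, sc in [(0.0, 1.0), (2.0, 0.5), (-1.0, 2.0)]]

# Per-domain z-scoring removes subject-specific offsets before pooling --
# a simple stand-in for the domain-generalization step.
def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

X = np.vstack([zscore(x) for x, _ in train_domains])
Y = np.concatenate([y for _, y in train_domains])

# Logistic regression by gradient descent on the pooled, normalized data.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - Y
    w -= 0.1 * (X.T @ g) / len(Y)
    b -= 0.1 * g.mean()

# Apply the pretrained model to an unseen "ecologically valid" domain.
x_new, y_new = make_domain(shift=3.0, scale=1.5)
p_new = 1.0 / (1.0 + np.exp(-(zscore(x_new) @ w + b)))
acc = ((p_new > 0.5) == y_new).mean()
print(f"held-out domain accuracy: {acc:.2f}")
```

Because the decoder was fit only on normalized, pooled training domains, its above-chance accuracy on the held-out domain mirrors the paper's transfer from laboratory paradigms to the driving task.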
Related papers
- Meta-Dynamical State Space Models for Integrative Neural Data Analysis [8.625491800829224]
Learning shared structure across environments facilitates rapid learning and adaptive behavior in neural systems.
There has been limited work exploiting the shared structure in neural activity during similar tasks for learning latent dynamics from neural recordings.
We propose a novel approach for meta-learning this solution space from task-related neural activity of trained animals.
arXiv Detail & Related papers (2024-10-07T19:35:49Z)
- BLEND: Behavior-guided Neural Population Dynamics Modeling via Privileged Knowledge Distillation [6.3559178227943764]
We propose BLEND, a behavior-guided neural population dynamics modeling framework via privileged knowledge distillation.
By considering behavior as privileged information, we train a teacher model that takes both behavior observations (privileged features) and neural activities (regular features) as inputs.
A student model is then distilled using only neural activity.
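As a rough illustration of the privileged-distillation idea described above, the sketch below trains a teacher on behavior (privileged) plus neural features, then distills a student that sees neural activity alone. The data, linear models, and soft-target regression are all illustrative assumptions, not BLEND's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: neural features (regular) and behavior (privileged),
# with the label tied to behavior and neural activity reflecting behavior.
n = 400
behavior = rng.normal(size=(n, 2))
neural = behavior @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(n, 6))
y = (behavior[:, 0] > 0).astype(float)

def fit_logreg(X, targets, steps=800, lr=0.2):
    # Gradient-descent logistic regression; works for soft targets too,
    # since the cross-entropy gradient is (p - target) either way.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - targets) / len(targets)
    return w

# Teacher: trained with both privileged (behavior) and regular (neural) inputs.
Xt = np.hstack([behavior, neural])
wt = fit_logreg(Xt, y)
soft = 1.0 / (1.0 + np.exp(-Xt @ wt))           # teacher's soft targets

# Student: distilled from the teacher's outputs using neural activity only.
ws = fit_logreg(neural, soft)
student_p = 1.0 / (1.0 + np.exp(-neural @ ws))
acc = ((student_p > 0.5) == y).mean()
print(f"student accuracy from neural activity alone: {acc:.2f}")
```

The student never sees behavior at inference time, yet inherits much of the teacher's accuracy, which is the point of treating behavior as privileged information.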
arXiv Detail & Related papers (2024-10-02T12:45:59Z) - Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation [7.7227297059345466]
We present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space.
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models.
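A minimal sketch of the amortized, simulation-based idea described above: simulate a toy cognitive model many times, learn a direct mapping from data summaries to the latent parameter, then apply it to new data. The generative model, summary features, and linear estimator are illustrative assumptions; the paper itself uses recurrent networks and richer cognitive models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy cognitive model (an assumption for illustration): a latent drift
# parameter produces noisy response times with mean 1 / (drift + 0.1).
def simulate(drift, trials=50):
    return 1.0 / (drift + 0.1) + rng.normal(0.0, 0.2, size=trials)

# Simulation-based training set: summary features of simulated data,
# paired with the latent parameter that generated them.
drifts = rng.uniform(0.2, 2.0, size=2000)
feats = np.array([[1.0 / simulate(d).mean(), 1.0] for d in drifts])

# Amortized estimator: a direct (here linear) mapping from data summaries
# to the latent variable, standing in for the paper's recurrent network.
coef, *_ = np.linalg.lstsq(feats, drifts, rcond=None)

# Recover the latent parameter from a new, unlabeled dataset.
true_drift = 1.0
observed = simulate(true_drift)
est = np.array([1.0 / observed.mean(), 1.0]) @ coef
print(f"estimated drift: {est:.2f} (true value {true_drift})")
```

Once the estimator is fit, inference on new data is a single forward mapping rather than a per-dataset optimization, which is what makes the approach amortized.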
arXiv Detail & Related papers (2024-06-20T21:13:39Z) - Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z) - Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - The world seems different in a social context: a neural network analysis
of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Approximating Attributed Incentive Salience In Large Scale Scenarios. A
Representation Learning Approach Based on Artificial Neural Networks [5.065947993017158]
We propose a methodology based on artificial neural networks (ANNs) for approximating latent states produced by incentive salience attribution.
We designed an ANN for estimating duration and intensity of future interactions between individuals and a series of video games in a large-scale longitudinal dataset.
arXiv Detail & Related papers (2021-08-03T20:03:21Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks, that is different from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.