Transfer learning to decode brain states reflecting the relationship
between cognitive tasks
- URL: http://arxiv.org/abs/2206.03950v1
- Date: Tue, 7 Jun 2022 09:39:47 GMT
- Title: Transfer learning to decode brain states reflecting the relationship
between cognitive tasks
- Authors: Youzhi Qu, Xinyao Jian, Wenxin Che, Penghui Du, Kai Fu, Quanying Liu
- Abstract summary: We propose a transfer learning framework to reflect the relationship between cognitive tasks.
We compare the task relations reflected by transfer learning with those derived from the overlaps of activated brain regions.
Our results yield a cognitive taskonomy that reflects the relationships between cognitive tasks.
- Score: 1.3701366534590498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning improves the performance of a target task by leveraging
the data of a specific source task: the closer the relationship between the
source and target tasks, the greater the performance improvement from
transfer learning. In neuroscience, the relationship between cognitive tasks is
usually represented by the similarity of activated brain regions or neural
representations. However, no study has linked transfer learning and neuroscience
to reveal the relationship between cognitive tasks. In this study, we propose a
transfer learning framework to reflect the relationship between cognitive
tasks, and compare the task relations reflected by transfer learning with those
derived from the overlaps of brain regions (e.g., Neurosynth). Our transfer
learning results yield a cognitive taskonomy that reflects the relationships
between cognitive tasks and is well in line with the task relations derived
from Neurosynth. Transfer learning performs better in task decoding with fMRI
data when the source and target cognitive tasks activate similar brain regions.
Our study uncovers the relationships among multiple cognitive tasks and provides
guidance for source task selection in transfer learning for neural decoding
based on small-sample data.
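As a rough illustration of the source-to-target recipe the abstract describes, here is a minimal sketch in PyTorch: pretrain a decoder on plentiful source-task data, then freeze the early layers and fine-tune on a small target-task set. The architecture, shapes, and synthetic data are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of source-to-target transfer for fMRI task decoding.
# All shapes, layer sizes, and the synthetic data are illustrative
# assumptions, not the paper's actual architecture or dataset.
import torch
import torch.nn as nn

N_VOXELS, N_CLASSES = 1000, 2  # hypothetical parcellated fMRI features

def make_decoder():
    return nn.Sequential(
        nn.Linear(N_VOXELS, 128), nn.ReLU(),
        nn.Linear(128, N_CLASSES),
    )

def train(model, x, y, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Synthetic stand-ins: a large source-task set and a small target-task set.
x_src, y_src = torch.randn(500, N_VOXELS), torch.randint(0, N_CLASSES, (500,))
x_tgt, y_tgt = torch.randn(30, N_VOXELS), torch.randint(0, N_CLASSES, (30,))

model = train(make_decoder(), x_src, y_src)    # pretrain on the source task
for p in model[0].parameters():                # freeze the first layer
    p.requires_grad = False
model = train(model, x_tgt, y_tgt, epochs=20)  # fine-tune on the target task
```

The gain of such a fine-tuned decoder over one trained on the target task from scratch is the kind of signal a framework like this can use to index how closely two tasks are related.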
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
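A toy sketch of the coupling idea above, assuming ANN unit activations and FBN time courses recorded over the same stimuli; the correlate-and-assign step is an illustrative stand-in for the paper's actual coupling procedure.

```python
# Toy coupling of artificial-neuron activations to functional brain
# networks (FBNs) via temporal correlation. Shapes and data are
# hypothetical stand-ins for real model activations and neuroimaging.
import numpy as np

T, N_UNITS, N_FBNS = 200, 64, 7  # timepoints, ANN units, brain networks
rng = np.random.default_rng(0)
ann_act = rng.standard_normal((T, N_UNITS))  # ANN activations per stimulus
fbn_ts = rng.standard_normal((T, N_FBNS))    # FBN response time courses

# Pearson correlation of every ANN unit with every FBN time course.
z_ann = (ann_act - ann_act.mean(0)) / ann_act.std(0)
z_fbn = (fbn_ts - fbn_ts.mean(0)) / fbn_ts.std(0)
coupling = z_ann.T @ z_fbn / T               # (N_UNITS, N_FBNS)

# Group neurons by their best-matching FBN to form sub-groups.
sub_groups = coupling.argmax(axis=1)
for k in range(N_FBNS):
    print(f"FBN {k}: {np.sum(sub_groups == k)} coupled units")
```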
- Auto Detecting Cognitive Events Using Machine Learning on Pupillary Data [0.0]
Pupil size is a valuable indicator of cognitive workload, reflecting changes in attention and arousal governed by the autonomic nervous system.
This study explores the potential of using machine learning to automatically detect cognitive events experienced by individuals.
arXiv Detail & Related papers (2024-10-18T04:54:46Z)
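A minimal sketch of the detection setup, with hypothetical per-window pupil features (mean diameter, variance, slope) and synthetic labels standing in for annotated cognitive events.

```python
# Sketch of classifying windows of pupillary data as containing a
# cognitive event or not. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows = 300
# Hypothetical per-window features: mean pupil diameter, variance, slope.
X = rng.standard_normal((n_windows, 3))
y = rng.integers(0, 2, n_windows)  # 1 = event in window, 0 = none

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```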
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
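A toy leaky integrate-and-fire simulation showing what neuronal heterogeneity means in practice: each neuron gets its own time constant and firing threshold. All parameters are illustrative, not taken from the paper.

```python
# Toy LIF population with heterogeneous time constants and thresholds.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_steps, dt = 50, 1000, 1e-3   # 50 neurons, 1 ms steps
tau = rng.uniform(0.01, 0.05, n_neurons)  # heterogeneous time constants (s)
v_th = rng.uniform(0.8, 1.2, n_neurons)   # heterogeneous thresholds

v = np.zeros(n_neurons)
spikes = np.zeros((n_steps, n_neurons))
for t in range(n_steps):
    i_in = rng.uniform(0.0, 2.5, n_neurons)  # noisy input current
    v += dt / tau * (-v + i_in)              # leaky integration
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = 0.0                           # reset after a spike

print("mean firing rate (Hz):", spikes.mean() / dt)
```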
- Uncovering cognitive taskonomy through transfer learning in masked autoencoder-based fMRI reconstruction [6.3348067441225915]
We employ the masked autoencoder (MAE) model to reconstruct functional magnetic resonance imaging (fMRI) data.
Our study suggests that fMRI reconstruction with the MAE model can uncover latent representations.
arXiv Detail & Related papers (2024-05-24T09:29:16Z)
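A minimal masked-autoencoder sketch on fMRI-like feature vectors: hide a large fraction of the input and train the network to reconstruct the hidden entries. The element-wise masking, 75% ratio, and tiny encoder-decoder are simplifying assumptions.

```python
# Minimal MAE-style training loop: mask inputs, reconstruct, and compute
# the loss only on masked entries. Shapes and data are synthetic.
import torch
import torch.nn as nn

N_FEATURES, MASK_RATIO = 400, 0.75
model = nn.Sequential(                      # tiny encoder-decoder
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, N_FEATURES)            # synthetic fMRI feature vectors
for _ in range(100):
    mask = torch.rand_like(x) < MASK_RATIO  # True where input is hidden
    recon = model(x.masked_fill(mask, 0.0))
    loss = ((recon - x)[mask] ** 2).mean()  # loss on masked entries only
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final masked reconstruction loss:", loss.item())
```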
- Exploring a Cognitive Architecture for Learning Arithmetic Equations [0.0]
This paper explores the cognitive mechanisms underlying arithmetic learning.
I implement a number-vectorization embedding network and an associative memory model to investigate how an intelligent system can learn and recall arithmetic equations.
I aim to contribute to ongoing research into the neural correlates of mathematical cognition in intelligent systems.
arXiv Detail & Related papers (2024-05-05T18:42:00Z)
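A toy associative memory for arithmetic facts, assuming a hypothetical deterministic number-vectorization; recall works by matching a query encoding against the stored keys.

```python
# Toy key-value associative memory: store (a, b) -> a + b and recall by
# nearest stored key. The encoding is an illustrative assumption.
import numpy as np

def encode(a, b, dim=32):
    """Hypothetical number-vectorization: a fixed random code per pair."""
    rng = np.random.default_rng(a * 100 + b)
    return rng.standard_normal(dim)

keys, values = [], []
for a in range(10):                 # store a small table of addition facts
    for b in range(10):
        keys.append(encode(a, b))
        values.append(a + b)
keys = np.stack(keys)

def recall(a, b):
    """Retrieve the value whose stored key best matches the query."""
    q = encode(a, b)
    return values[int(np.argmax(keys @ q))]

print("3 + 4 recalled as:", recall(3, 4))   # exact key match -> 7
```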
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
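A sketch of how pairwise transfer results can be summarized as a task-relation matrix; transfer_score here is a placeholder for the real pretrain-then-fine-tune evaluation, and the task names are hypothetical.

```python
# Build a task-relation ("taskonomy") matrix from pairwise transfer scores.
import numpy as np

tasks = ["motor", "memory", "language", "emotion"]  # hypothetical tasks
rng = np.random.default_rng(3)

def transfer_score(src, tgt):
    """Placeholder for: pretrain on src, fine-tune on tgt, return accuracy."""
    return 1.0 if src == tgt else float(rng.uniform(0.5, 0.9))

T = np.array([[transfer_score(s, t) for t in tasks] for s in tasks])
for s, row in zip(tasks, T):
    print(s, np.round(row, 2))
# Closely related tasks should show higher off-diagonal transfer scores.
```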
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
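A minimal example of the dropout mechanism the summary refers to; the synergy/redundancy analysis itself is information-theoretic and is not reproduced here.

```python
# Training a small network with dropout, which randomly zeroes units and
# is the mechanism linked above to increased redundancy and robustness.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.5),                # zero half the units each step
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
model.train()                         # dropout active in training mode
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
model.eval()                          # dropout disabled at evaluation time
```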
- Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z)
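A generic multi-task sketch with a shared encoder and per-task heads; it conveys cross-task knowledge sharing in spirit only and is not the neural-process formulation of the paper.

```python
# Shared-trunk multi-task learning: gradients from every task shape the
# shared encoder, which acts as the common inductive bias.
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU())         # shared trunk
heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(3)])  # one per task
opt = torch.optim.Adam(
    list(shared.parameters()) + list(heads.parameters()), lr=1e-3)

# Synthetic regression data for three related tasks.
xs = [torch.randn(64, 10) for _ in range(3)]
ys = [torch.randn(64, 1) for _ in range(3)]
for _ in range(100):
    opt.zero_grad()
    loss = sum(F.mse_loss(h(shared(x)), y)
               for h, x, y in zip(heads, xs, ys))
    loss.backward()
    opt.step()
```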
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose CogAlign, an approach that integrates cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
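A toy alignment sketch, assuming textual features and cognitive signals of hypothetical dimensions; a learned projection pulls the text representations toward the cognitive-signal space. CogAlign's actual method differs in detail.

```python
# Learn a projection from textual features into a cognitive-signal space
# (e.g., eye-tracking or EEG features). All data here is synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_DIM, COG_DIM = 300, 32
proj = nn.Linear(TEXT_DIM, COG_DIM)      # text -> cognitive-signal space
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)

text_feats = torch.randn(200, TEXT_DIM)  # synthetic word representations
cog_signals = torch.randn(200, COG_DIM)  # synthetic cognitive features
for _ in range(100):
    opt.zero_grad()
    F.mse_loss(proj(text_feats), cog_signals).backward()
    opt.step()
```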