Evaluating the structure of cognitive tasks with transfer learning
- URL: http://arxiv.org/abs/2308.02408v1
- Date: Fri, 28 Jul 2023 14:51:09 GMT
- Title: Evaluating the structure of cognitive tasks with transfer learning
- Authors: Bruno Aristimunha, Raphael Y. de Camargo, Walter H. Lopez Pinaya,
Sylvain Chevallier, Alexandre Gramfort, Cedric Rommel
- Abstract summary: This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
- Score: 67.22168759751541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electroencephalography (EEG) decoding is a challenging task due to the
limited availability of labelled data. While transfer learning is a promising
technique to address this challenge, it assumes that transferable data domains
and tasks are known, which is not the case in this setting. This study
investigates the transferability of deep learning representations between
different EEG decoding tasks. We conduct extensive experiments using
state-of-the-art decoding models on two recently released EEG datasets, ERP
CORE and M$^3$CV, containing over 140 subjects and 11 distinct cognitive tasks.
We measure the transferability of learned representations by pre-training deep
neural networks on one task and assessing their ability to decode subsequent
tasks. Our experiments demonstrate that, even with linear probing transfer,
significant improvements in decoding performance can be obtained, with gains of
up to 28% compared with the purely supervised approach. Additionally, we find
evidence that certain decoding paradigms elicit specific and narrow brain
activities, while others benefit from pre-training on a broad range of
representations. By revealing which tasks transfer well and demonstrating the
benefits of transfer learning for EEG decoding, our findings have practical
implications for mitigating data scarcity in this setting. The transfer maps
generated also provide insights into the hierarchical relations between
cognitive tasks, hence enhancing our understanding of how these tasks are
connected from a neuroscientific standpoint.
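As a concrete illustration of the linear-probing protocol the abstract describes, the sketch below freezes an encoder pre-trained on a source EEG task and trains only a linear classifier on the target task. It is a minimal sketch assuming a generic PyTorch encoder and data loader; none of the names come from the paper's code.

```python
# Minimal sketch of linear-probing transfer (illustrative, not the paper's code).
# The pre-trained encoder is frozen; only a linear head is fit on the new task.
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, feat_dim: int, n_classes: int,
                 loader, epochs: int = 10, lr: float = 1e-3):
    encoder.eval()                        # freeze the source-task representation
    for p in encoder.parameters():
        p.requires_grad = False

    head = nn.Linear(feat_dim, n_classes)           # the only trainable part
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader:               # x: (batch, channels, time) EEG windows
            with torch.no_grad():
                z = encoder(x)            # fixed (batch, feat_dim) features
            loss = loss_fn(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```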
Related papers
- TVE: Learning Meta-attribution for Transferable Vision Explainer [76.68234965262761]
We introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks.
TVE is realized through a pre-training process on large-scale datasets towards learning the meta-attribution.
This meta-attribution leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge for the input instance, which enables TVE to seamlessly transfer to explain various downstream tasks.
arXiv Detail & Related papers (2023-12-23T21:49:23Z)
- Amplifying Pathological Detection in EEG Signaling Pathways through Cross-Dataset Transfer Learning [10.212217551908525]
We study the effectiveness of data and model scaling and cross-dataset knowledge transfer in a real-world pathology classification task.
We identify the challenges of possible negative transfer and emphasize the significance of some key components.
Our findings indicate that a small, generic model (e.g., ShallowNet) performs well on a single dataset, whereas a larger model (e.g., TCN) performs better when transferring and learning from a larger, more diverse dataset.
arXiv Detail & Related papers (2023-09-19T20:09:15Z)
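A hedged sketch of the cross-dataset warm start this entry studies: a target model is initialised from weights pre-trained on a source dataset, skipping the classification head when the label spaces differ. The model, checkpoint path, and `classifier` prefix are placeholders, not the authors' code.

```python
# Sketch of cross-dataset transfer via warm-starting a backbone (illustrative).
import torch
import torch.nn as nn

def warm_start(target_model: nn.Module, source_ckpt: str,
               head_prefix: str = "classifier") -> nn.Module:
    # Assumes the checkpoint stores a plain state_dict.
    state = torch.load(source_ckpt, map_location="cpu")
    # Keep only backbone weights; the head is re-learned on the target dataset.
    backbone = {k: v for k, v in state.items() if not k.startswith(head_prefix)}
    missing, unexpected = target_model.load_state_dict(backbone, strict=False)
    print(f"re-initialised: {missing}; ignored: {unexpected}")
    return target_model
```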
- EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning [13.404503606887715]
We present a new solution for the classification of cognitive load using electroencephalogram (EEG) signals.
We pre-train our model using self-supervised masked autoencoding on emotion-related EEG datasets.
Our experiments show that the proposed approach achieves strong results, outperforming conventional single-stage fully supervised learning.
arXiv Detail & Related papers (2023-08-01T02:59:19Z)
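The pre-training stage described here follows the general masked-autoencoding recipe. The toy training step below, which zeroes random time segments of an EEG window and penalises reconstruction error only on the masked samples, is an assumption about that recipe rather than the authors' implementation.

```python
# Toy masked-autoencoding step on EEG windows (an assumed recipe, not the paper's).
import torch
import torch.nn as nn

def masked_ae_step(model: nn.Module, x: torch.Tensor, opt,
                   mask_ratio: float = 0.5, seg: int = 25) -> float:
    # x: (batch, channels, time); assumes time >> seg.
    b, c, t = x.shape
    mask = torch.ones_like(x)
    for i in range(b):
        n_seg = int(mask_ratio * t / seg)
        starts = torch.randint(0, t - seg, (n_seg,))
        for s in starts:
            mask[i, :, s:s + seg] = 0.0   # hide contiguous time segments
    recon = model(x * mask)               # model only sees the masked signal
    # Reconstruction loss computed on the hidden samples only.
    loss = ((recon - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```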
- Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks [27.17697714584768]
We study the generalization behavior of transfer learning of deep neural networks (DNNs).
We show that the test error evolution during the target training has a more significant double descent effect when the target training dataset is sufficiently large.
We also show that the double descent phenomenon may make transfer from a less related source task better than transfer from a more related source task.
arXiv Detail & Related papers (2022-11-20T20:26:23Z)
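Double descent is easy to reproduce in miniature. The numpy toy below fits minimum-norm random-feature regressions of growing width; test error typically spikes near the interpolation threshold p ≈ n before falling again. It illustrates the phenomenon only and has no connection to the paper's DNN experiments.

```python
# Toy double descent with random-feature regression (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 30, 1000
X, Xt = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
w = rng.standard_normal(d)
y = X @ w + 0.5 * rng.standard_normal(n)   # noisy training labels
yt = Xt @ w                                # clean test targets

for p in [10, 50, 90, 100, 110, 200, 1000]:     # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    F, Ft = np.tanh(X @ W), np.tanh(Xt @ W)
    beta = np.linalg.pinv(F) @ y                # minimum-norm least squares
    err = np.mean((Ft @ beta - yt) ** 2)
    print(f"p={p:5d}  test MSE={err:8.3f}")     # error peaks near p ≈ n
```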
- An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target task training data on sequential transfer learning in the dialog domain.
Counterintuitively, our data show that the size of the target-task training set often has minimal effect on how sequential transfer learning performs compared to the same model trained without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z)
- Transfer learning to decode brain states reflecting the relationship between cognitive tasks [1.3701366534590498]
We propose a transfer learning framework to reflect the relationship between cognitive tasks.
We compare the task relations reflected by transfer learning and by the overlaps of brain regions.
Our results yield a cognitive taskonomy that reflects the relationships between cognitive tasks.
arXiv Detail & Related papers (2022-06-07T09:39:47Z)
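One way to turn such task relations into a "taskonomy" is to cluster tasks by pairwise transfer gains. The sketch below does this with scipy's hierarchical clustering; the gain matrix and paradigm names (borrowed from common ERP components) are made-up placeholders, not results from either paper.

```python
# Sketch: hierarchical task clustering from a (placeholder) transfer-gain map.
import numpy as np
from scipy.cluster.hierarchy import linkage

tasks = ["N170", "P300", "N400", "MMN"]      # example ERP paradigm names
# gains[i, j]: decoding gain when pre-training on task i and probing task j
gains = np.array([[1.0, 0.6, 0.2, 0.1],
                  [0.5, 1.0, 0.3, 0.2],
                  [0.2, 0.3, 1.0, 0.4],
                  [0.1, 0.2, 0.5, 1.0]])
dist = 1.0 - (gains + gains.T) / 2.0         # symmetrise gains into distances
condensed = dist[np.triu_indices_from(dist, k=1)]
Z = linkage(condensed, method="average")     # agglomerative task hierarchy
print(Z)                                     # merge order = the task taxonomy
```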
- Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases recall and F1 score by 5.0% and 7.8%, respectively, compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z)
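Domain-adversarial training, which TAN builds on, is usually implemented with a gradient reversal layer. A standard PyTorch sketch follows; TAN's task semantic adaptor is not reproduced here.

```python
# Standard gradient reversal layer for domain-adversarial training.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)                # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None   # flip gradients into the encoder

def grad_reverse(x, lam: float = 1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_head(grad_reverse(features)). The domain head
# learns to tell domains apart, while the reversed gradient pushes the encoder
# to make them indistinguishable, yielding domain-invariant features.
```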
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
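The core use of such task embeddings is source-task selection: rank candidate sources by similarity to the target embedding. A minimal sketch with cosine similarity; the task names and vectors are made up for illustration.

```python
# Sketch: rank source tasks by task-embedding similarity (illustrative only).
import numpy as np

def rank_sources(target_emb: np.ndarray, source_embs: dict) -> list:
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cos(target_emb, e) for name, e in source_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)  # best source first

rng = np.random.default_rng(1)
sources = {t: rng.standard_normal(64) for t in ["MNLI", "SQuAD", "SST-2"]}
print(rank_sources(rng.standard_normal(64), sources))
```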
- DEPARA: Deep Attribution Graph for Deep Knowledge Transferability [91.06106524522237]
We propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from pre-trained deep neural networks (PR-DNNs).
In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regard to the outputs of the PR-DNN.
The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs.
arXiv Detail & Related papers (2020-03-17T02:07:50Z)
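A much-simplified sketch of the node side of this idea: represent each model by input-gradient attribution maps on shared probe inputs, then compare models by the similarity of those maps. DEPARA proper also builds edges between inputs; that part is omitted here.

```python
# Simplified attribution-similarity comparison between two pre-trained models.
import torch
import torch.nn.functional as F

def attribution(model, x):
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()              # gradient of outputs w.r.t. inputs
    return (x.grad * x).flatten(1)         # one vectorised map per probe input

def transferability_score(model_a, model_b, probes) -> float:
    va = attribution(model_a, probes)
    vb = attribution(model_b, probes)
    # Per-probe cosine similarity of the two models' attribution maps.
    return F.cosine_similarity(va, vb, dim=1).mean().item()
```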
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
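The recipe this entry analyses is the standard torchvision pattern: load ImageNet weights, replace the head with the character classes, and fine-tune. A brief sketch, with the 62-class head (10 digits plus upper- and lower-case letters) as an example choice, probing the head before unfreezing the backbone:

```python
# Standard ImageNet-pretrained transfer for character recognition (sketch).
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)  # ImageNet pre-training
model.fc = nn.Linear(model.fc.in_features, 62)            # e.g. digits + letters
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")   # train head first, unfreeze later
```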
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.