Aggregating Intrinsic Information to Enhance BCI Performance through
Federated Learning
- URL: http://arxiv.org/abs/2308.11636v1
- Date: Mon, 14 Aug 2023 08:59:44 GMT
- Title: Aggregating Intrinsic Information to Enhance BCI Performance through
Federated Learning
- Authors: Rui Liu, Yuanyuan Chen, Anran Li, Yi Ding, Han Yu, Cuntai Guan
- Abstract summary: Insufficient data is a long-standing challenge for Brain-Computer Interface (BCI) to build a high-performance deep learning model.
We propose a hierarchical personalized Federated Learning EEG decoding framework to surmount this challenge.
- Score: 29.65566062475597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Insufficient data is a long-standing challenge for Brain-Computer Interface
(BCI) to build a high-performance deep learning model. Though numerous research
groups and institutes collect a multitude of EEG datasets for the same BCI
task, sharing EEG data from multiple sites is still challenging due to the
heterogeneity of devices. The significance of this challenge cannot be
overstated, given the critical role of data diversity in fostering model
robustness. However, existing works rarely discuss this issue, predominantly
centering their attention on model training within a single dataset, often in
the context of inter-subject or inter-session settings. In this work, we
propose a hierarchical personalized Federated Learning EEG decoding (FLEEG)
framework to surmount this challenge. This innovative framework heralds a new
learning paradigm for BCI, enabling datasets with disparate data formats to
collaborate in the model training process. Each client is assigned a specific
dataset and trains a hierarchical personalized model to manage diverse data
formats and facilitate information exchange. Meanwhile, the server coordinates
the training procedure to harness knowledge gleaned from all datasets, thus
elevating overall performance. The framework has been evaluated in Motor
Imagery (MI) classification with nine EEG datasets collected by different
devices but implementing the same MI task. Results demonstrate that the
proposed framework can boost classification performance by up to 16.7% by
enabling knowledge sharing across multiple datasets, especially for smaller
datasets. Visualization results also indicate that the framework enables the
local models to maintain a stable focus on task-related areas, yielding better
performance. To the best of our knowledge, this is the first end-to-end
solution to address this important challenge.
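
The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the general idea, not the authors' FLEEG implementation: each client keeps a personalized input stage sized to its own EEG format, while the server averages only the shared upper stage. All names (ClientModel, aggregate_shared) and the toy two-stage architecture are illustrative assumptions.

```python
# Sketch: format-heterogeneous personalized FL (hypothetical names, not the
# published FLEEG code). Personalized layers stay local; shared layers are
# averaged by the server, FedAvg-style.
import copy
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, n_channels, n_samples, n_classes, hidden=64):
        super().__init__()
        # Personalized stage: maps this dataset's (channels x samples) format
        # into a common feature size; its weights never leave the client.
        self.personal = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, hidden),
            nn.ReLU(),
        )
        # Shared stage: identical shape on every client, aggregated by the server.
        self.shared = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.shared(self.personal(x))

def aggregate_shared(models, weights):
    """Weighted average over the shared stage only (server step)."""
    avg = copy.deepcopy(models[0].shared.state_dict())
    for key in avg:
        avg[key] = sum(w * m.shared.state_dict()[key]
                       for m, w in zip(models, weights))
    for m in models:
        m.shared.load_state_dict(avg)

# Three clients whose datasets use different channel counts / trial lengths.
formats = [(22, 250), (64, 128), (32, 200)]
clients = [ClientModel(c, t, n_classes=2) for c, t in formats]

for rnd in range(3):                      # communication rounds
    for (c, t), model in zip(formats, clients):
        x = torch.randn(8, c, t)          # stand-in for a local EEG batch
        y = torch.randint(0, 2, (8,))
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    aggregate_shared(clients, weights=[1 / 3] * 3)
```

Only the shared parameters ever cross the network, which is what allows clients recording with different channel counts and trial lengths to train together.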
Related papers
- A CLIP-Powered Framework for Robust and Generalizable Data Selection [51.46695086779598]
Real-world datasets often contain redundant and noisy data, which negatively impacts training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset.
We propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
arXiv Detail & Related papers (2024-10-15T03:00:58Z)
- FissionVAE: Federated Non-IID Image Generation with Latent Space and Decoder Decomposition [9.059664504170287]
Federated learning enables decentralized clients to collaboratively learn a shared model while keeping all the training data local.
We introduce a novel approach, FissionVAE, which decomposes the latent space and constructs decoder branches tailored to individual client groups (see the first sketch after this list).
To evaluate our approach, we assemble two composite datasets: the first combines MNIST and FashionMNIST; the second comprises RGB datasets of cartoon and human faces, wild animals, marine vessels, and remote sensing images of Earth.
arXiv Detail & Related papers (2024-08-30T08:22:30Z)
- Self-Regulated Data-Free Knowledge Amalgamation for Text Classification [9.169836450935724]
We develop a lightweight student network that can learn from multiple teacher models without accessing their original training data.
To accomplish this, we propose STRATANET, a modeling framework that produces text data tailored to each teacher.
We evaluate our method on three benchmark text classification datasets with varying labels or domains.
arXiv Detail & Related papers (2024-06-16T21:13:30Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Towards Personalized Federated Learning via Heterogeneous Model Reassembly [84.44268421053043]
pFedHR is a framework that leverages heterogeneous model reassembly to achieve personalized federated learning.
pFedHR dynamically generates diverse personalized models in an automated manner.
arXiv Detail & Related papers (2023-08-16T19:36:01Z)
- An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training [79.78201886156513]
We present a model that can perform multiple vision tasks and can be adapted to other downstream tasks efficiently.
Our approach achieves comparable results to single-task state-of-the-art models and demonstrates strong generalization on downstream tasks.
arXiv Detail & Related papers (2023-06-29T17:59:57Z)
- Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning [89.21177894013225]
For a federated learning model to perform well, it is crucial to have a diverse and representative dataset.
We show that the statistical criterion used to quantify data diversity, as well as the choice of federated learning algorithm, has a significant effect on the resulting equilibrium.
We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population.
arXiv Detail & Related papers (2023-06-08T23:38:25Z)
- Motor Imagery Decoding Using Ensemble Curriculum Learning and Collaborative Training [11.157243900163376]
Multi-subject EEG datasets present several kinds of domain shifts.
These domain shifts impede robust cross-subject generalization.
We propose a two-stage model ensemble architecture built with multiple feature extractors.
We demonstrate that our model ensembling approach combines the strengths of curriculum learning and collaborative training.
arXiv Detail & Related papers (2022-11-21T13:45:44Z)
- A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck [68.61583160269664]
Event argument extraction (EAE) aims to extract arguments with given roles from texts.
We propose a multi-format transfer learning model with variational information bottleneck.
We conduct extensive experiments on three benchmark datasets, and obtain new state-of-the-art performance on EAE.
arXiv Detail & Related papers (2022-08-27T13:52:01Z)
- Aggregation Delayed Federated Learning [20.973999078271483]
Federated learning is a distributed machine learning paradigm where multiple data owners (clients) collaboratively train one machine learning model while keeping data on their own devices.
Studies have found performance reduction with standard federated algorithms, such as FedAvg, on non-IID data.
Many existing works on handling non-IID data adopt the same aggregation framework as FedAvg and focus on improving model updates either on the server side or on clients.
In this work, we tackle this challenge by introducing redistribution rounds that delay the aggregation. We perform experiments on multiple tasks and show that the proposed framework significantly improves performance on non-IID data (see the second sketch after this list).
arXiv Detail & Related papers (2021-08-17T04:06:10Z)
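
As referenced in the FissionVAE entry above, here is a rough sketch of the decoder-decomposition idea it names: a shared encoder with one decoder branch per client group. The split shown here, and every name in it, is an illustrative guess; the actual FissionVAE architecture and its latent-space decomposition are not detailed in the summary.

```python
# Hypothetical sketch of per-group decoder branches on a shared VAE encoder
# (not the published FissionVAE architecture).
import torch
import torch.nn as nn

class GroupedVAE(nn.Module):
    def __init__(self, n_groups, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)    # outputs mean and log-variance
        # One decoder branch per client group (e.g. MNIST vs. FashionMNIST).
        self.decoders = nn.ModuleList(
            nn.Linear(latent, dim) for _ in range(n_groups))

    def forward(self, x, group):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        return self.decoders[group](z), mu, logvar

vae = GroupedVAE(n_groups=2)
x = torch.rand(4, 784)                            # stand-in for an image batch
recon, mu, logvar = vae(x, group=0)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl   # ELBO-style objective
```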
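And for the Aggregation Delayed Federated Learning entry: a toy FedAvg loop in which a delay parameter runs extra local passes before each server average. Plain FedAvg (weighting clients by data volume) is the standard baseline the entry mentions; the delay knob is only a stand-in for the paper's redistribution rounds, whose exact mechanics the summary does not give.

```python
# Toy FedAvg on a least-squares problem; delay > 1 postpones aggregation by
# running extra local passes per round (a rough stand-in, not the paper's method).
import numpy as np

def local_update(w, data, lr=0.1):
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)     # one gradient step on local data
    return w - lr * grad

def fedavg(clients, w, rounds=10, delay=1):
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    mix = sizes / sizes.sum()             # weight clients by data volume
    for _ in range(rounds):
        locals_ = [w.copy() for _ in clients]
        for _ in range(delay):            # extra passes before aggregating
            locals_ = [local_update(lw, d) for lw, d in zip(locals_, clients)]
        w = sum(p * lw for p, lw in zip(mix, locals_))   # server average
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 50, 80):                    # unequal client sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))
w = fedavg(clients, w=np.zeros(2), rounds=20, delay=2)
print(np.round(w, 2))                     # approaches true_w
```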