Alignment-Based Adversarial Training (ABAT) for Improving the Robustness and Accuracy of EEG-Based BCIs
- URL: http://arxiv.org/abs/2411.02094v1
- Date: Mon, 04 Nov 2024 13:56:54 GMT
- Title: Alignment-Based Adversarial Training (ABAT) for Improving the Robustness and Accuracy of EEG-Based BCIs
- Authors: Xiaoqing Chen, Ziwei Wang, Dongrui Wu
- Abstract summary: ABAT performs EEG data alignment before adversarial training.
Data alignment aligns EEG trials from different domains to reduce their distribution discrepancies.
Adversarial training further robustifies the classification boundary.
- Score: 20.239554619810935
- Abstract: Machine learning has achieved great success in electroencephalogram (EEG) based brain-computer interfaces (BCIs). Most existing BCI studies focused on improving the decoding accuracy, with only a few considering the adversarial security. Although many adversarial defense approaches have been proposed in other application domains such as computer vision, previous research showed that their direct extensions to BCIs degrade the classification accuracy on benign samples. This phenomenon greatly affects the applicability of adversarial defense approaches to EEG-based BCIs. To mitigate this problem, we propose alignment-based adversarial training (ABAT), which performs EEG data alignment before adversarial training. Data alignment aligns EEG trials from different domains to reduce their distribution discrepancies, and adversarial training further robustifies the classification boundary. The integration of data alignment and adversarial training can make the trained EEG classifiers simultaneously more accurate and more robust. Experiments on five EEG datasets from two different BCI paradigms (motor imagery classification, and event related potential recognition), three convolutional neural network classifiers (EEGNet, ShallowCNN and DeepCNN) and three different experimental settings (offline within-subject cross-block/-session classification, online cross-session classification, and pre-trained classifiers) demonstrated its effectiveness. It is very intriguing that adversarial attacks, which are usually used to damage BCI systems, can be used in ABAT to simultaneously improve the model accuracy and robustness.
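The abstract does not spell out the alignment step, but Euclidean alignment is a standard choice for reducing distribution discrepancies across EEG domains: each trial is whitened by the inverse square root of the mean spatial covariance of its domain, so that the aligned trials have an identity mean covariance. A minimal NumPy sketch of this idea, assuming trials are shaped `(n_trials, n_channels, n_samples)` (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def euclidean_align(trials):
    """Align EEG trials from one domain via Euclidean alignment.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns trials whitened by the inverse square root of the
    mean spatial covariance, so the aligned trials have an
    (approximately) identity mean covariance.
    """
    n_samples = trials.shape[2]
    # Per-trial spatial covariance, averaged over trials.
    covs = np.array([X @ X.T for X in trials]) / n_samples
    R = covs.mean(axis=0)
    # Inverse matrix square root via eigendecomposition
    # (R is symmetric positive definite for full-rank data).
    vals, vecs = np.linalg.eigh(R)
    R_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    # Apply R^{-1/2} to every trial.
    return np.einsum('cd,ndt->nct', R_inv_sqrt, trials)
```

In ABAT this alignment would be applied per domain (e.g. per session or block) before adversarial training, which then perturbs the aligned trials to robustify the decision boundary; the adversarial-training step itself is not shown here.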
Related papers
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Time-Frequency Jointed Imperceptible Adversarial Attack to Brainprint Recognition with Deep Learning Models [7.019918611093632]
We introduce a novel adversarial attack method that jointly attacks time-domain and frequency-domain EEG signals.
Our method achieves state-of-the-art attack performance on three datasets and three deep-learning models.
arXiv Detail & Related papers (2024-03-15T05:06:34Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving framework in which distributed devices in edge networks collaboratively train a model.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Latent Alignment with Deep Set EEG Decoders [44.128689862889715]
We introduce the Latent Alignment method that won the Benchmarks for EEG Transfer Learning competition.
We present its formulation as a deep set applied on the set of trials from a given subject.
Our experimental results show that performing statistical distribution alignment at later stages in a deep learning model is beneficial to the classification accuracy.
arXiv Detail & Related papers (2023-11-29T12:40:45Z)
- EEG-NeXt: A Modernized ConvNet for The Classification of Cognitive Activity from EEG [0.0]
One of the main challenges in electroencephalogram (EEG) based brain-computer interface (BCI) systems is learning the subject/session invariant features to classify cognitive activities.
We propose a novel end-to-end machine learning pipeline, EEG-NeXt, which facilitates transfer learning.
arXiv Detail & Related papers (2022-12-08T10:15:52Z)
- Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces [28.686844131216287]
Machine learning has achieved great success in electroencephalogram (EEG) based brain-computer interfaces (BCIs).
Recent studies have shown that EEG-based BCIs are vulnerable to adversarial attacks, where small perturbations added to the input can cause misclassification.
This paper, for the first time, explores adversarial detection in EEG-based BCIs.
arXiv Detail & Related papers (2022-11-28T11:05:32Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept, the adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Common Spatial Generative Adversarial Networks based EEG Data Augmentation for Cross-Subject Brain-Computer Interface [4.8276709243429]
Cross-subject application of EEG-based brain-computer interfaces (BCIs) has always been limited by large individual differences and complex signal characteristics that are difficult to perceive.
We propose a cross-subject EEG classification framework with a generative adversarial networks (GANs) based method named common spatial GAN (CS-GAN).
Our framework provides a promising way to deal with the cross-subject problem and promote the practical application of BCI.
arXiv Detail & Related papers (2021-02-08T10:37:03Z)
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes using narrow-period pulses for poisoning attacks on EEG-based BCIs, which is implementable in practice and has never been considered before.
arXiv Detail & Related papers (2020-10-30T20:49:42Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.