CSP-Net: Common Spatial Pattern Empowered Neural Networks for EEG-Based Motor Imagery Classification
- URL: http://arxiv.org/abs/2411.11879v1
- Date: Mon, 04 Nov 2024 13:48:58 GMT
- Title: CSP-Net: Common Spatial Pattern Empowered Neural Networks for EEG-Based Motor Imagery Classification
- Authors: Xue Jiang, Lubin Meng, Xinru Chen, Yifan Xu, Dongrui Wu
- Abstract summary: This paper proposes two CSP-empowered neural networks (CSP-Nets).
CSP-Nets integrate knowledge-driven CSP filters with data-driven CNNs to enhance performance in MI classification.
Experiments on four public MI datasets demonstrated that the two CSP-Nets consistently improved over their CNN backbones.
- Score: 23.289676815663523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electroencephalogram-based motor imagery (MI) classification is an important paradigm of non-invasive brain-computer interfaces. Common spatial pattern (CSP), which exploits different energy distributions on the scalp while performing different MI tasks, is very popular in MI classification. Convolutional neural networks (CNNs) have also achieved great success, due to their powerful learning capabilities. This paper proposes two CSP-empowered neural networks (CSP-Nets), which integrate knowledge-driven CSP filters with data-driven CNNs to enhance the performance in MI classification. CSP-Net-1 directly adds a CSP layer before a CNN to improve the input discriminability. CSP-Net-2 replaces a convolutional layer in CNN with a CSP layer. The CSP layer parameters in both CSP-Nets are initialized with CSP filters designed from the training data. During training, they can either be kept fixed or optimized using gradient descent. Experiments on four public MI datasets demonstrated that the two CSP-Nets consistently improved over their CNN backbones, in both within-subject and cross-subject classifications. They are particularly useful when the number of training samples is very small. Our work demonstrates the advantage of integrating knowledge-driven traditional machine learning with data-driven deep learning in EEG-based brain-computer interfaces.
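The CSP layer described in the abstract amounts to a linear spatial projection whose weights are initialized from CSP filters computed on the training data. Below is a minimal sketch of that initialization for the two-class case, via the standard generalized eigendecomposition of class covariance matrices; the function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh


def csp_filters(X1, X2, n_filters=4):
    """Compute CSP spatial filters from two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns W of shape (n_filters, channels), usable to initialize a
    CSP layer. Sketch only; details may differ from the paper.
    """
    def mean_cov(X):
        # Average per-trial spatial covariance, (channels, channels)
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    # Eigenvectors at both ends of the spectrum maximize the variance
    # ratio for one class or the other
    order = np.argsort(vals)
    half = n_filters // 2
    picks = np.concatenate([order[:half], order[-(n_filters - half):]])
    return vecs[:, picks].T


def csp_layer(X, W):
    """Apply spatial filters: (trials, channels, samples) -> (trials, n_filters, samples)."""
    return np.einsum("fc,tcs->tfs", W, X)
```

In CSP-Net-1 this projection sits in front of the CNN; in CSP-Net-2 it replaces a convolutional layer, and in both cases the weights can stay frozen or be fine-tuned by gradient descent.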
Related papers
- CNN-Transformer Rectified Collaborative Learning for Medical Image Segmentation [60.08541107831459]
This paper proposes a CNN-Transformer rectified collaborative learning framework to learn stronger CNN-based and Transformer-based models for medical image segmentation.
Specifically, we propose a rectified logit-wise collaborative learning (RLCL) strategy which introduces the ground truth to adaptively select and rectify the wrong regions in student soft labels.
We also propose a class-aware feature-wise collaborative learning (CFCL) strategy to achieve effective knowledge transfer between CNN-based and Transformer-based models in the feature space.
arXiv Detail & Related papers (2024-08-25T01:27:35Z)
- Sharpend Cosine Similarity based Neural Network for Hyperspectral Image Classification [0.456877715768796]
Hyperspectral Image Classification (HSIC) is a difficult task due to high inter- and intra-class similarity and variability, nested regions, and overlap.
2D Convolutional Neural Networks (CNNs) emerged as viable models, whereas 3D CNNs are a better alternative due to more accurate classification.
This paper introduces Sharpened Cosine Similarity (SCS) concept as an alternative to convolutions in a Neural Network for HSIC.
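The SCS operation replaces the dot product at the heart of a convolution with a cosine similarity raised to a power, sharpening its selectivity. A hedged sketch of one plausible formulation follows; the exponent p and the stabilizer q are assumptions, not the paper's exact parameterization.

```python
import numpy as np


def sharpened_cosine_similarity(s, k, p=2.0, q=1e-3):
    """Sharpened cosine similarity between an input patch s and kernel k.

    Plain cosine similarity raised to the power p ("sharpening");
    q guards against division by zero. Illustrative formulation only.
    """
    dot = np.dot(s.ravel(), k.ravel())
    norm = (np.linalg.norm(s) + q) * (np.linalg.norm(k) + q)
    return np.sign(dot) * (np.abs(dot) / norm) ** p
```

Because the response is normalized by both patch and kernel magnitude, the unit responds to the *shape* of the input rather than its amplitude, which is the property that motivates swapping it in for convolution.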
arXiv Detail & Related papers (2023-05-26T07:04:00Z)
- Visual Recognition with Deep Nearest Centroids [57.35144702563746]
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition.
Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel recognition (ADE20K, Cityscapes).
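The decision rule behind DNC is distance-based rather than parametric: a sample takes the label of the closest class centroid in feature space. A minimal sketch of that rule, assuming the centroids have already been computed from deep features (which is where the actual method does its work):

```python
import numpy as np


def nearest_centroid_predict(features, centroids):
    """Assign each feature vector to the class of its nearest centroid.

    features: (n, d) embeddings from some backbone; centroids: (c, d).
    Sketch of the nearest-centroid rule only; centroid estimation over
    deep features is not shown.
    """
    # Pairwise squared Euclidean distances, shape (n, c)
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```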
arXiv Detail & Related papers (2022-09-15T15:47:31Z)
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN)
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- Learning Enhancement of CNNs via Separation Index Maximizing at the First Convolutional Layer [1.6244541005112747]
The Separation Index (SI), a supervised complexity measure, is explained, and its usage in better learning of CNNs for classification problems is illustrated.
A learning strategy is proposed in which the first convolutional layer of a CNN is optimized by maximizing the SI, and the remaining layers are trained through backpropagation.
arXiv Detail & Related papers (2022-01-13T21:32:14Z)
- Classification of Motor Imagery EEG Signals by Using a Divergence Based Convolutional Neural Network [0.0]
It is observed that data augmentation is rarely applied to increase the classification performance of EEG signals.
In this study, we have investigated the effect of the augmentation process on the classification performance of MI EEG signals.
arXiv Detail & Related papers (2021-03-19T18:27:28Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Collaborative Method for Incremental Learning on Classification and Generation [32.07222897378187]
We introduce a novel algorithm, Incremental Class Learning with Attribute Sharing (ICLAS), for incremental class learning with deep neural networks.
One of its components, incGAN, can generate images with greater variety than the training data.
Under the challenging condition of data deficiency, ICLAS incrementally trains the classification and generation networks.
arXiv Detail & Related papers (2020-10-29T06:34:53Z)
- A Dual-Dimer Method for Training Physics-Constrained Neural Networks with Minimax Architecture [6.245537312562826]
The training of physics-constrained neural networks with minimax architecture (PCNN-MMs) is formulated as a saddle-point search.
A novel saddle-point search algorithm called Dual-Dimer is used to find the high-order saddle points of the training objective.
The convergence of PCNN-MMs is faster than that of traditional PCNNs.
arXiv Detail & Related papers (2020-05-01T21:26:04Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
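The curriculum in the last entry can be approximated by low-pass filtering feature maps with a kernel whose smoothing strength decays as training progresses. A sketch assuming a Gaussian filter and a linear annealing schedule, neither of which is taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def smooth_feature_maps(feats, epoch, total_epochs, sigma_max=2.0):
    """Low-pass filter feature maps with strength annealed toward zero.

    feats: (batch, channels, H, W). The schedule and sigma_max are
    illustrative assumptions, not the paper's values.
    """
    sigma = sigma_max * (1.0 - epoch / total_epochs)  # anneal to 0
    if sigma <= 0:
        return feats
    # Smooth only the spatial axes, leaving batch/channel untouched
    return gaussian_filter(feats, sigma=(0, 0, sigma, sigma))
```

Early in training the network sees heavily smoothed (low-information) features; as sigma shrinks, progressively more high-frequency detail passes through, which is the curriculum effect the summary describes.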
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.