End-to-end learnable EEG channel selection with deep neural networks
- URL: http://arxiv.org/abs/2102.09050v2
- Date: Fri, 19 Feb 2021 10:24:56 GMT
- Title: End-to-end learnable EEG channel selection with deep neural networks
- Authors: Thomas Strypsteen and Alexander Bertrand
- Abstract summary: We propose a framework to embed the EEG channel selection in the neural network itself.
We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters.
This generic approach is evaluated on two different EEG tasks.
- Score: 72.21556656008156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many electroencephalography (EEG) applications rely on channel selection
methods to remove the least informative channels, e.g., to reduce the number of
electrodes to be mounted, to decrease the computational load, or to reduce
overfitting effects and improve performance. Wrapper-based channel selection
methods aim to match the channel selection step to the target model, yet they
require re-training the model multiple times on different candidate channel
subsets, which often leads to an unacceptably high computational cost,
especially when said model is a (deep) neural network. To alleviate this, we
propose a framework to embed the EEG channel selection in the neural network
itself, jointly learning the network weights and the optimal channels in an
end-to-end manner by traditional backpropagation algorithms. We deal with the
discrete nature of this new optimization problem by employing continuous
relaxations of the discrete channel selection parameters based on the
Gumbel-softmax trick. We also propose a regularization method that discourages
selecting channels more than once. This generic approach is evaluated on two
different EEG tasks: motor imagery brain-computer interfaces and auditory
attention decoding. The results demonstrate that our framework is generally
applicable, while being competitive with state-of-the-art EEG channel selection
methods tailored to these tasks.
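The abstract's core mechanism, a channel-selection layer trained jointly with the network via Gumbel-softmax relaxations plus a regularizer against duplicate selections, can be illustrated with a short PyTorch sketch. The class name, initialization, hyperparameters, and the specific overlap-based penalty below are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GumbelChannelSelection(nn.Module):
    """Sketch of a learnable EEG channel-selection layer (Gumbel-softmax relaxation)."""

    def __init__(self, n_channels: int, n_select: int, temperature: float = 1.0):
        super().__init__()
        # One row of selection logits per channel to be selected (small random init).
        self.logits = nn.Parameter(0.01 * torch.randn(n_select, n_channels))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, time)
        if self.training:
            # Continuous relaxation: differentiable soft one-hot weights per selection unit.
            w = F.gumbel_softmax(self.logits, tau=self.temperature, hard=False)
        else:
            # At inference, commit to the most likely channel per selection unit.
            w = F.one_hot(self.logits.argmax(dim=-1),
                          num_classes=self.logits.size(-1)).float()
        # Each output "channel" is a (soft) mixture of the input channels; lowering
        # the temperature during training pushes the mixture towards one-hot.
        return torch.einsum('kc,bct->bkt', w, x)

    def duplicate_penalty(self) -> torch.Tensor:
        # Regularizer discouraging two selection units from picking the same input
        # channel (an assumed overlap penalty, not necessarily the paper's exact form).
        p = F.softmax(self.logits, dim=-1)
        overlap = p @ p.t()
        return overlap.sum() - overlap.diagonal().sum()
```

A typical use would prepend this layer to an existing EEG classifier, e.g. `features = GumbelChannelSelection(64, 8)(eeg_batch)`, add a weighted `duplicate_penalty()` term to the task loss, and anneal the temperature over the course of training.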
Related papers
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z)
- Optimal channel selection with discrete QCQP [14.734454356396158]
We propose a novel channel selection method that optimally selects channels via discrete QCQP.
We also propose a quadratic model that accurately estimates the actual inference time of the pruned network.
Our experiments on CIFAR-10 and ImageNet show our proposed pruning method outperforms other fixed-importance channel pruning methods on various network architectures.
arXiv Detail & Related papers (2022-02-24T23:26:51Z)
- DGAFF: Deep Genetic Algorithm Fitness Formation for EEG Bio-Signal Channel Selection [12.497603617622907]
Channel selection has been utilized to reduce data dimensionality and eliminate irrelevant channels.
We present a channel selection method, which combines a sequential search method with a genetic algorithm called Deep GA Fitness Formation.
The proposed method outperforms other channel selection methods in classifying motor imagery on the utilized dataset.
arXiv Detail & Related papers (2022-02-21T08:06:17Z)
- Learning Signal Representations for EEG Cross-Subject Channel Selection and Trial Classification [0.3553493344868413]
We introduce an algorithm for subject-independent channel selection of EEG recordings.
It exploits channel-specific 1D-Convolutional Neural Networks (1D-CNNs) as feature extractors in a supervised fashion to maximize class separability.
After training, the algorithm can be applied to signals from new subjects by transferring only the parametrized subgroup of selected channel-specific 1D-CNNs.
arXiv Detail & Related papers (2021-06-20T06:22:16Z)
- AutoPruning for Deep Neural Network with Dynamic Channel Masking [28.018077874687343]
We propose a learning-based auto-pruning algorithm for deep neural networks.
A two-objective problem that seeks both the weights and the best channels for each layer is first formulated.
An alternative optimization approach is then proposed to derive the optimal channel numbers and weights simultaneously.
arXiv Detail & Related papers (2020-10-22T20:12:46Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- Channel Equilibrium Networks for Learning Deep Representation [63.76618960820138]
This work shows that the combination of normalization and rectified linear function leads to inhibited channels.
Unlike prior work that simply removes the inhibited channels, we propose to "wake them up" during training by designing a novel neural building block.
The Channel Equilibrium (CE) block enables channels at the same layer to contribute equally to the learned representation.
arXiv Detail & Related papers (2020-02-29T09:02:31Z)
- Discrimination-aware Network Pruning for Deep Model Compression [79.44318503847136]
Existing pruning methods either train from scratch with sparsity constraints or minimize the reconstruction error between the feature maps of the pre-trained models and the compressed ones.
We propose a simple-yet-effective method called discrimination-aware channel pruning (DCP) to choose the channels that actually contribute to the discriminative power.
Experiments on both image classification and face recognition demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2020-01-04T07:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.