Signal2Image Modules in Deep Neural Networks for EEG Classification
- URL: http://arxiv.org/abs/1904.13216v9
- Date: Mon, 13 Nov 2023 18:17:33 GMT
- Title: Signal2Image Modules in Deep Neural Networks for EEG Classification
- Authors: Paschalis Bizopoulos, George I Lambrou and Dimitrios Koutsouris
- Abstract summary: We define the term Signal2Image (S2Is) as trainable or non-trainable prefix modules that convert signals to image-like representations.
We compare the accuracy and time performance of four S2Is (`signal as image', spectrogram, one- and two-layer Convolutional Neural Networks (CNNs)) combined with a set of `base models' (LeNet, AlexNet, VGGnet, ResNet, DenseNet), along with the depth-wise and 1D variations of the latter.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has revolutionized computer vision utilizing the increased
availability of big data and the power of parallel computational units such as
graphics processing units. The vast majority of deep learning research is
conducted using images as training data; however, the biomedical domain is rich
in physiological signals that are used for diagnosis and prediction problems.
It is still an open research question how to best utilize signals to train deep
neural networks.
In this paper we define the term Signal2Image (S2Is) as trainable or
non-trainable prefix modules that convert signals, such as
Electroencephalography (EEG), to image-like representations making them
suitable for training image-based deep neural networks defined as `base
models'. We compare the accuracy and time performance of four S2Is (`signal as
image', spectrogram, one- and two-layer Convolutional Neural Networks (CNNs))
combined with a set of `base models' (LeNet, AlexNet, VGGnet, ResNet, DenseNet)
along with the depth-wise and 1D variations of the latter. We also provide
empirical evidence that the one-layer CNN S2I outperforms the non-trainable
S2Is in eleven out of the fifteen tested models for classifying EEG signals,
and we present visual comparisons of the outputs of the S2Is.
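To make the idea concrete, the following is a minimal PyTorch sketch of a trainable one-layer CNN S2I prefixed to a toy image `base model'; the layer sizes, the 178-sample epoch length and the five-class output are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class OneLayerCnnS2I(nn.Module):
    """Trainable prefix that maps a 1D signal to an image-like tensor.

    Sketch of the idea only: a 1D convolution produces `height` feature
    channels, which are treated as the vertical axis of a 1-channel image.
    """
    def __init__(self, height=64):
        super().__init__()
        self.conv = nn.Conv1d(1, height, kernel_size=3, padding=1)

    def forward(self, x):             # x: (batch, 1, time)
        z = torch.relu(self.conv(x))  # (batch, height, time)
        return z.unsqueeze(1)         # (batch, 1, height, time), image-like

# Prefix the S2I to any image 'base model' (here a toy 2D CNN classifier).
s2i = OneLayerCnnS2I(height=64)
base = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                     nn.Flatten(), nn.Linear(8, 5))  # e.g. 5 EEG classes
eeg = torch.randn(2, 1, 178)   # a batch of hypothetical EEG epochs
logits = base(s2i(eeg))        # (2, 5); both modules train end to end
```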
Related papers
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
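As a rough illustration of the measure involved (not the authors' exact procedure), the sketch below computes a spectral entropy over the eigenvalues of a row-normalised Gaussian-kernel diffusion matrix; the bandwidth `sigma`, diffusion time `t` and random data are placeholder assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    d = squareform(pdist(X))                   # pairwise distances
    K = np.exp(-(d ** 2) / (2 * sigma ** 2))   # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic diffusion matrix
    lam = np.abs(np.linalg.eigvals(P)) ** t    # eigenvalue magnitudes at time t
    p = lam / lam.sum()                        # spectrum as a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())       # Shannon entropy of the spectrum

X = np.random.randn(200, 10)                   # e.g. hidden-layer activations
print(diffusion_spectral_entropy(X, sigma=2.0, t=3))
```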
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
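For readers unfamiliar with the operation under study, a generic symmetric-normalised graph convolution (not the paper's specific setting) looks like this:

```python
import numpy as np

def graph_convolution(A, X, W):
    # X' = D^{-1/2} (A + I) D^{-1/2} X W, with self-loops added
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-node path graph
X = np.random.randn(3, 4)                               # node features
W = np.random.randn(4, 2)                               # learnable weights
print(graph_convolution(A, X, W).shape)                 # (3, 2)
```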
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Signal Processing for Implicit Neural Representations
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
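To illustrate what a differential operator on an INR can mean in practice (a generic sketch, not the INSP-Net architecture), autograd lets one evaluate derivatives of the continuous representation directly, without discretizing first:

```python
import torch

# A toy INR: an MLP mapping 1D coordinates to signal values.
inr = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

# Apply d/dx in the continuous domain via automatic differentiation,
# instead of finite differences on a sampled grid.
x = torch.linspace(0, 1, 100).unsqueeze(1).requires_grad_(True)
y = inr(x)
dy_dx = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
print(dy_dx.shape)  # (100, 1): derivative of the INR at the query points
```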
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Neural Implicit Dictionary via Mixture-of-Expert Training
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes up to two orders of magnitude faster, with up to 98% less input data.
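A minimal sketch of the dictionary idea, with hypothetical sizes and a softmax gate standing in for whatever mixing scheme the paper actually uses:

```python
import torch
import torch.nn as nn

class NeuralImplicitDictionary(nn.Module):
    # A bank of small coordinate-based subnetworks mixed by a learned gate.
    def __init__(self, n_experts=8, hidden=32):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_experts))
        self.gate = nn.Sequential(nn.Linear(2, n_experts), nn.Softmax(dim=-1))

    def forward(self, coords):                  # coords: (N, 2) pixel xy
        w = self.gate(coords)                   # (N, n_experts) mixing weights
        outs = torch.cat([e(coords) for e in self.experts], dim=-1)  # (N, E)
        return (outs * w).sum(dim=-1, keepdim=True)                  # (N, 1)

nid = NeuralImplicitDictionary()
xy = torch.rand(1024, 2)       # query coordinates of a 2D image
print(nid(xy).shape)           # (1024, 1) predicted intensities
```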
arXiv Detail & Related papers (2022-07-08T05:07:19Z) - Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems
A trained T1-class Convolutional Neural Network (CNN) model is used to examine its ability to successfully identify motor imagery.
In theory, if the model has been trained accurately, it should be able to identify a class and label it accordingly.
The CNN model is then restored and used to identify the same class of motor imagery from much smaller data samples.
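A bare-bones sketch of that restore-and-classify step in PyTorch (the network, file name and shapes are placeholders, not the paper's model):

```python
import torch
import torch.nn as nn

# Hypothetical minimal CNN standing in for the trained motor-imagery model.
model = nn.Sequential(nn.Conv1d(8, 16, 5), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
torch.save(model.state_dict(), "mi_cnn.pt")   # saved after training

# Later: restore the weights and classify a much smaller EEG sample.
model.load_state_dict(torch.load("mi_cnn.pt"))
model.eval()
with torch.no_grad():
    short_epoch = torch.randn(1, 8, 64)       # fewer samples than training data
    pred = model(short_epoch).argmax(dim=1)
print(pred)
```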
arXiv Detail & Related papers (2022-05-31T17:09:46Z) - Unsupervised Denoising of Optical Coherence Tomography Images with Dual_Merged CycleWGAN
We propose a new Cycle-Consistent Generative Adversarial Network called Dual-Merged Cycle-WGAN for retinal OCT image denoising.
Our model consists of two Cycle-GAN networks with improved generator, discriminator, and Wasserstein loss to achieve good training stability and better performance.
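The two loss ingredients named above can be sketched as follows; the toy generators and critic are placeholders, and the gradient penalty that usually accompanies a Wasserstein critic is omitted for brevity:

```python
import torch
import torch.nn as nn

G_ab = nn.Conv2d(1, 1, 3, padding=1)   # toy generator: noisy -> clean
G_ba = nn.Conv2d(1, 1, 3, padding=1)   # toy generator: clean -> noisy
critic = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1),
                       nn.Flatten(), nn.LazyLinear(1))

def wasserstein_critic_loss(real, fake):
    # The critic widens the gap between its scores on real and fake images.
    return critic(fake).mean() - critic(real).mean()

def cycle_consistency_loss(a, b, lam=10.0):
    # A->B->A and B->A->B should reconstruct the inputs (L1 penalty).
    return lam * ((G_ba(G_ab(a)) - a).abs().mean() +
                  (G_ab(G_ba(b)) - b).abs().mean())

noisy = torch.randn(2, 1, 64, 64)      # e.g. noisy OCT patches
clean = torch.randn(2, 1, 64, 64)
loss = (wasserstein_critic_loss(clean, G_ab(noisy))
        + cycle_consistency_loss(noisy, clean))
loss.backward()
```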
arXiv Detail & Related papers (2022-05-02T07:38:19Z) - EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification
We propose an end-to-end deep learning architecture called EEG-ITNet.
Our model can extract rich spectral, spatial, and temporal information from multi-channel EEG signals.
EEG-ITNet shows up to 5.9% improvement in classification accuracy across different scenarios.
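An inception-style temporal block of the general kind described, parallel dilated 1D convolutions concatenated channel-wise, can be sketched as below; the channel counts and dilations are illustrative, not the published EEG-ITNet configuration:

```python
import torch
import torch.nn as nn

class InceptionTemporalBlock(nn.Module):
    def __init__(self, ch_in=22, ch_branch=8, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel branches with growing receptive fields over time.
        self.branches = nn.ModuleList(
            nn.Conv1d(ch_in, ch_branch, kernel_size=3, dilation=d, padding=d)
            for d in dilations)

    def forward(self, x):                       # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

eeg = torch.randn(4, 22, 500)                   # 22-channel EEG, 500 samples
print(InceptionTemporalBlock()(eeg).shape)      # (4, 24, 500)
```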
arXiv Detail & Related papers (2022-04-14T13:18:43Z) - HistoTransfer: Understanding Transfer Learning for Histopathology
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to a gain in performance.
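The usual feature-extraction setup behind such comparisons (a generic sketch using torchvision, not the paper's code) truncates a pretrained network before its classifier:

```python
import torch
import torchvision.models as models

# ImageNet-pretrained backbone with the final classifier removed.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
extractor.eval()

with torch.no_grad():
    tiles = torch.randn(8, 3, 224, 224)    # e.g. histopathology tiles
    feats = extractor(tiles).flatten(1)    # (8, 512) feature vectors
print(feats.shape)
```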
arXiv Detail & Related papers (2021-06-13T18:55:23Z) - Variational models for signal processing with Graph Neural Networks
This paper is devoted to signal processing on point-clouds by means of neural networks.
In this work, we investigate the use of variational models for such Graph Neural Networks to process signals on graphs for unsupervised learning.
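As a generic example of a variational model on a graph signal (not necessarily the paper's formulation), Tikhonov smoothing minimises ||x - y||^2 + lam * x^T L x and has the closed form (I + lam L)^{-1} y:

```python
import numpy as np

def tikhonov_denoise(L, y, lam=1.0):
    # Closed-form minimiser of ||x - y||^2 + lam * x^T L x.
    return np.linalg.solve(np.eye(L.shape[0]) + lam * L, y)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy 3-node graph
L = np.diag(A.sum(axis=1)) - A                          # combinatorial Laplacian
y = np.array([1.0, 0.2, -0.1])                          # noisy node signal
print(tikhonov_denoise(L, y, lam=0.5))                  # smoothed signal
```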
arXiv Detail & Related papers (2021-03-30T13:31:11Z) - Binary Graph Neural Networks
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
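The standard trick behind such binarisation, sign quantisation with a straight-through gradient estimator, can be sketched as follows (a generic sketch, not the paper's specific strategies):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)              # binary {-1, +1} weights forward

    @staticmethod
    def backward(ctx, g):
        (w,) = ctx.saved_tensors
        return g * (w.abs() <= 1).float() # straight-through, clipped at |w|>1

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(2, 4)
out = x @ BinarizeSTE.apply(w)            # binary weights in the forward pass
out.sum().backward()                      # real-valued gradients reach w
print(w.grad.shape)                       # (4, 4)
```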
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - On the interplay between physical and content priors in deep learning for computational imaging
We use the Phase Extraction Neural Network (PhENN) for quantitative phase retrieval in a lensless phase imaging system.
We show that the two questions are related and share a common crux: the choice of the training examples.
We also discover that a weaker regularization effect leads to better learning of the underlying propagation model.
arXiv Detail & Related papers (2020-04-14T08:36:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.