Generative Autoencoder Kernels on Deep Learning for Brain Activity
Analysis
- URL: http://arxiv.org/abs/2101.10263v1
- Date: Thu, 21 Jan 2021 08:19:47 GMT
- Title: Generative Autoencoder Kernels on Deep Learning for Brain Activity
Analysis
- Authors: Gokhan Altan, Yakup Kutlu
- Abstract summary: Hessenberg decomposition-based ELM autoencoder (HessELM-AE) is a novel kernel that generates different representations of the input data.
The aim of the study is to analyze the performance of the novel deep AE kernel for clinical use on the electroencephalogram (EEG) of stroke patients.
- Score: 3.04585143845864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Learning (DL) is a two-step classification scheme: a
feature-learning stage that generates feature representations in an
unsupervised way, followed by a supervised learning stage at the last step of
the model, built from at least two hidden fully connected layers of an
artificial neural network. Optimizing the predefined classification parameters
of the supervised models eases reaching global optimality with exactly zero
training error. Autoencoder (AE) models are highly generalized forms of the
unsupervised stage in DL, defining the output weights of the hidden neurons
under various representations. As an alternative to the conventional Extreme
Learning Machine (ELM) AE, the Hessenberg decomposition-based ELM autoencoder
(HessELM-AE) is a novel kernel that generates different representations of the
input data at the intended model sizes.
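As a rough illustration of the kernel idea, here is a minimal single-layer
ELM-AE sketch with an optional Hessenberg-based solve. The function name, the
sigmoid activation, and the exact way the Hessenberg decomposition enters the
output-weight computation are our assumptions for illustration, not the
authors' published implementation.

```python
import numpy as np
from scipy.linalg import hessenberg, pinv

def elm_autoencoder(X, n_hidden, seed=0, use_hessenberg=False):
    """Minimal single-layer ELM autoencoder (illustrative sketch).

    With use_hessenberg=True, the Gram matrix of the hidden layer is
    reduced to Hessenberg form before solving for the output weights;
    this is one plausible reading of HessELM-AE, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden layer

    if use_hessenberg:
        G = H.T @ H                                  # Gram matrix, h x h
        Hs, Q = hessenberg(G, calc_q=True)           # G = Q @ Hs @ Q.T
        # Solve the normal equations G @ beta = H.T @ X through the
        # decomposition instead of forming a pseudoinverse directly.
        beta = Q @ np.linalg.solve(Hs, Q.T @ (H.T @ X))
    else:
        beta = pinv(H) @ X                           # conventional ELM-AE

    return beta  # output weights; X @ beta.T is the new representation
```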
The aim of this study is to analyze the performance of the novel deep AE
kernel for clinical use on the electroencephalogram (EEG) of stroke patients.
The slow cortical potential (SCP) training of stroke patients during eight
neurofeedback sessions was analyzed using the Hilbert-Huang Transform. The
statistical features of different frequency modulations were fed into the
Deep ELM model to form generative AE kernels.
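As a hedged sketch of this feature-extraction step: a full Hilbert-Huang
Transform first splits each EEG segment into intrinsic mode functions via
empirical mode decomposition (EMD); assuming one such component is given, the
code below derives simple statistics from its analytic signal. The specific
statistics are illustrative, not necessarily the paper's exact feature set.

```python
import numpy as np
from scipy.signal import hilbert

def hht_statistics(component, fs):
    """Statistical features of one intrinsic mode function / component.

    Assumes EMD has already been applied to the raw EEG; the chosen
    statistics are illustrative placeholders, not the paper's list.
    """
    analytic = hilbert(component)                      # analytic signal
    amp = np.abs(analytic)                             # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)    # instantaneous frequency, Hz
    return np.array([
        amp.mean(), amp.std(), amp.max(),
        inst_freq.mean(), inst_freq.std(),
    ])
```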
The novel Deep ELM-AE kernels discriminated brain activity with high
classification performance on the positivity and negativity tasks in stroke
patients.
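Putting the stages together, a minimal sketch of the deep pipeline, reusing
elm_autoencoder from the earlier sketch: stacked (Hess)ELM-AE layers produce
successive representations of the HHT feature vectors, and the final
representation is what the supervised stage classifies. The layer sizes and
classifier choice are placeholders, not the paper's configuration.

```python
def deep_elm_features(X, layer_sizes, seed=0, use_hessenberg=True):
    """Unsupervised stage: stack ELM-AE layers (elm_autoencoder from the
    sketch above), each mapping the current representation to the next."""
    Z = X
    for i, h in enumerate(layer_sizes):
        beta = elm_autoencoder(Z, n_hidden=h, seed=seed + i,
                               use_hessenberg=use_hessenberg)
        Z = Z @ beta.T  # representation passed to the next layer
    return Z  # the supervised classifier is trained on this at the last step
```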
Related papers
- FoME: A Foundation Model for EEG using Adaptive Temporal-Lateral Attention Scaling [19.85701025524892]
FoME (Foundation Model for EEG) is a novel approach using adaptive temporal-lateral attention scaling.
FoME is pre-trained on a diverse 1.7TB dataset of scalp and intracranial EEG recordings, comprising 745M parameters trained for 1,096k steps.
arXiv Detail & Related papers (2024-09-19T04:22:40Z) - RISE-iEEG: Robust to Inter-Subject Electrodes Implantation Variability iEEG Classifier [0.0]
RISE-iEEG stands for Robust to Inter-Subject Electrodes Implantation Variability iEEG classifier.
We developed an iEEG decoder model that can be applied across multiple patients' data without requiring the electrode coordinates for each patient.
Our analysis shows that the performance of RISE-iEEG is 10% higher than that of HTNet and EEGNet in terms of F1 score.
arXiv Detail & Related papers (2024-08-12T18:33:19Z) - Enhancing Cognitive Workload Classification Using Integrated LSTM Layers and CNNs for fNIRS Data Analysis [13.74551296919155]
This paper explores the impact of Long Short-Term Memory (LSTM) layers on the effectiveness of Convolutional Neural Networks (CNNs) within deep learning models.
By integrating LSTM layers, the model can capture temporal dependencies in the fNIRS data, allowing for a more comprehensive understanding of cognitive states.
arXiv Detail & Related papers (2024-07-22T11:28:34Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - A multi-stage machine learning model on diagnosis of esophageal
manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Interpreting Deep Learning Models for Epileptic Seizure Detection on EEG
signals [4.748221780751802]
Deep Learning (DL) is often considered the state of the art for Artificial Intelligence-based medical decision support.
Yet it remains sparsely implemented in clinical practice and poorly trusted by clinicians, due to the insufficient interpretability of neural network models.
We have tackled this issue by developing interpretable DL models in the context of online epileptic seizure detection based on EEG signals.
arXiv Detail & Related papers (2020-12-22T11:10:23Z) - A Systematic Approach to Featurization for Cancer Drug Sensitivity
Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found the RNA-seq to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.