Learning Patterns in Imaginary Vowels for an Intelligent Brain Computer
Interface (BCI) Design
- URL: http://arxiv.org/abs/2010.12066v2
- Date: Fri, 18 Feb 2022 18:50:59 GMT
- Title: Learning Patterns in Imaginary Vowels for an Intelligent Brain Computer
Interface (BCI) Design
- Authors: Parisa Ghane and Gahangir Hossain
- Abstract summary: We propose a modular framework for the recognition of vowels as the AI part of a brain computer interface system.
We carefully designed the modules to discriminate the English vowels given the raw EEG signals.
We provide the algorithms of the proposed framework to make it easy for future researchers and developers who want to follow the same workflow.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Technology advancements have made it easy to measure non-invasive,
high-quality electroencephalograph (EEG) signals from the human brain. Hence,
the development of robust and high-performance AI algorithms becomes crucial to
properly process the EEG signals and recognize the patterns that lead to an
appropriate control signal. Despite the advancements in processing motor
imagery EEG signals, healthcare applications such as emotion detection are
still in the early stages of AI design. In this paper, we propose a modular
framework for the recognition of vowels as the AI part of a brain computer
interface system. We carefully designed the modules to discriminate the English
vowels given the raw EEG signals, while avoiding the issues typical of
data-poor environments such as most healthcare applications. The proposed
framework consists of appropriate signal segmentation, filtering, extraction of
spectral features, dimensionality reduction by means of principal component
analysis, and finally multi-class classification by a decision-tree-based
support vector machine (DT-SVM). The performance of our framework was evaluated
by a combination of test-set and resubstitution (also known as apparent) error
rates. We provide the algorithms of the proposed framework to make it easy for
future researchers and developers who want to follow the same workflow.
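
A minimal Python sketch of the workflow named in the abstract is given below:
fixed-length segmentation, band-pass filtering, Welch spectral features,
principal component analysis, a decision-tree-style arrangement of binary SVMs,
and a combined resubstitution/test-set error estimate. The window length,
filter band, component count, class-splitting rule, and error weighting are
illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.decomposition import PCA
from sklearn.svm import SVC


def segment(eeg, fs, win_s=1.0):
    """Split a (channels, samples) EEG record into fixed-length windows."""
    step = int(win_s * fs)
    return [eeg[:, i * step:(i + 1) * step] for i in range(eeg.shape[1] // step)]


def bandpass(window, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase Butterworth band-pass filter (1-40 Hz band assumed)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, window, axis=1)


def spectral_features(window, fs):
    """Flattened per-channel Welch power spectral density as the feature vector."""
    _, psd = welch(window, fs=fs, nperseg=min(window.shape[1], 256))
    return psd.reshape(-1)


def reduce_dims(features, n_components=20):
    """Principal component analysis for dimensionality reduction (component count assumed)."""
    return PCA(n_components=n_components).fit_transform(features)


class DTSVM:
    """Chain of binary SVMs that peels one class off at each node (one simple
    decision-tree-of-SVMs arrangement; the paper's exact tree may differ)."""

    def fit(self, X, y):
        self.nodes, classes = [], list(np.unique(y))
        while len(classes) > 1:
            target = classes.pop(0)
            clf = SVC(kernel="rbf").fit(X, (y == target).astype(int))
            self.nodes.append((target, clf))
            X, y = X[y != target], y[y != target]
        self.default = classes[0]
        return self

    def predict(self, X):
        pred = np.full(len(X), self.default, dtype=object)
        undecided = np.ones(len(X), dtype=bool)
        for target, clf in self.nodes:
            hit = undecided & (clf.predict(X) == 1)
            pred[hit] = target
            undecided &= ~hit
        return pred


def combined_error(model, X_train, y_train, X_test, y_test, w=0.5):
    """Weighted mix of resubstitution (apparent) and test-set error rates;
    the 50/50 weighting is an assumption, the abstract only says both are combined."""
    e_app = np.mean(model.predict(X_train) != y_train)
    e_test = np.mean(model.predict(X_test) != y_test)
    return w * e_app + (1 - w) * e_test
```

With per-segment spectral features stacked into a matrix and a vowel label per
segment, the usual split applies: fit the PCA and DTSVM on the training portion
only, then report combined_error over the training and test portions.
reduce_dims above is condensed for brevity and would be fit on training data
and reused for test data in practice.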
Related papers
- Towards Linguistic Neural Representation Learning and Sentence Retrieval from Electroencephalogram Recordings [27.418738450536047]
We propose a two-step pipeline for converting EEG signals into sentences.
We first confirm that word-level semantic information can be learned from EEG data recorded during natural reading.
We employ a training-free retrieval method to retrieve sentences based on the predictions from the EEG encoder.
arXiv Detail & Related papers (2024-08-08T03:40:25Z)
- EEG decoding with conditional identification information [7.873458431535408]
Decoding EEG signals is crucial for unraveling the human brain and advancing brain-computer interfaces.
Traditional machine learning algorithms have been hindered by the high noise levels and inherent inter-person variations in EEG signals.
Recent advances in deep neural networks (DNNs) have shown promise, owing to their advanced nonlinear modeling capabilities.
arXiv Detail & Related papers (2024-03-21T13:38:59Z)
- Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder [69.7813498468116]
We propose Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text.
We also develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations) to decode text from EEG sequences.
arXiv Detail & Related papers (2024-02-27T11:45:21Z)
- CSLP-AE: A Contrastive Split-Latent Permutation Autoencoder Framework for Zero-Shot Electroencephalography Signal Conversion [49.1574468325115]
A key aim in EEG analysis is to extract the underlying neural activation (content) as well as to account for the individual subject variability (style).
Inspired by recent advancements in voice conversion technologies, we propose a novel contrastive split-latent permutation autoencoder (CSLP-AE) framework that directly optimizes for EEG conversion.
arXiv Detail & Related papers (2023-11-13T22:46:43Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data like images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI-model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Improving EEG Decoding via Clustering-based Multi-task Feature Learning [27.318646122939537]
Machine learning provides a promising technique to optimize EEG patterns toward better decoding accuracy.
Existing algorithms do not effectively explore the underlying data structure capturing the true EEG sample distribution.
We propose a clustering-based multi-task feature learning algorithm for improved EEG pattern decoding.
arXiv Detail & Related papers (2020-12-12T13:31:53Z)
- Electroencephalography signal processing based on textural features for monitoring the driver's state by a Brain-Computer Interface [3.613072342189595]
We investigate a textural processing method as an indicator to estimate the driver's vigilance in a hypothetical Brain-Computer Interface (BCI) system.
The novelty of the solution proposed relies on employing the one-dimensional Local Binary Pattern (1D-LBP) algorithm for feature extraction from pre-processed EEG data.
Our analysis allows us to conclude that adopting the 1D-LBP has led to a significant performance improvement (a minimal 1D-LBP sketch appears after this list).
arXiv Detail & Related papers (2020-10-13T14:16:00Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework for symbol detection design that combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
- Motor Imagery Classification of Single-Arm Tasks Using Convolutional Neural Network based on Feature Refining [5.620334754517149]
Motor imagery (MI) is commonly used for recovery or rehabilitation of motor functions due to its signal origin.
In this study, we proposed a band-power feature refining convolutional neural network (BFR-CNN) to achieve high classification accuracy.
arXiv Detail & Related papers (2020-02-04T04:36:09Z)
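
For the driver-monitoring entry above, the one-dimensional Local Binary Pattern
(1D-LBP) feature extraction can be illustrated with a short sketch. This follows
one common formulation (compare each sample with its P neighbours, read the sign
pattern as a binary code, and histogram the codes over the window); the
neighbour count and normalisation here are assumptions, not that paper's exact
settings.

```python
import numpy as np


def lbp_1d(signal, p=8):
    """Histogram of 1D Local Binary Pattern codes for a 1-D signal (p neighbours per sample)."""
    half = p // 2
    codes = []
    for i in range(half, len(signal) - half):
        # p/2 neighbours on each side of the centre sample
        neigh = np.concatenate([signal[i - half:i], signal[i + 1:i + 1 + half]])
        bits = (neigh >= signal[i]).astype(int)           # threshold against the centre
        codes.append(int("".join(map(str, bits)), 2))     # binary pattern -> integer code
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p))
    return hist / max(len(codes), 1)                      # normalised code histogram
```

Such a histogram would typically be computed per channel on each pre-processed
EEG window before classification.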