Interpreting Imagined Speech Waves with Machine Learning techniques
- URL: http://arxiv.org/abs/2010.03360v2
- Date: Wed, 25 Nov 2020 15:42:44 GMT
- Title: Interpreting Imagined Speech Waves with Machine Learning techniques
- Authors: Abhiram Singh, Ashwin Gumaste
- Abstract summary: This work explores the possibility of decoding Imagined Speech (IS) signals, which can be used to create a new design of Human-Computer Interface (HCI).
Since the underlying process generating EEG signals is unknown, various feature extraction methods, along with different neural network (NN) models, are used to approximate data distribution and classify IS signals.
- Score: 1.776746672434207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work explores the possibility of decoding Imagined Speech (IS) signals
which can be used to create a new design of Human-Computer Interface (HCI).
Since the underlying process generating EEG signals is unknown, various feature
extraction methods, along with different neural network (NN) models, are used
to approximate the data distribution and classify IS signals. Based on the
experimental results, a feed-forward NN model with ensemble and
covariance-matrix-transformed features showed the highest performance in
comparison to other existing methods. Three publicly available datasets were
used for comparison. We report a mean classification accuracy of 80% between
the rest and imagined states, and accuracies of 96% and 80% for decoding long
and short words on two datasets. These results show that it is possible to
differentiate brain signals generated during the rest state from IS brain
signals. Based on the experimental results, we suggest that word length and
complexity can be used to decode IS signals with high accuracy, and that a BCI
system can be designed with IS signals for computer interaction. These ideas
and results give direction for the development of a commercial-level, IS-based
BCI system that can be used for human-computer interaction in daily life.
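To make the described pipeline concrete, the following minimal sketch computes covariance-matrix features from EEG trials and classifies them with a small ensemble of feed-forward NNs. All names, shapes, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact pipeline): covariance-matrix features
# for imagined-speech EEG trials, classified with a feed-forward NN ensemble.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier

def covariance_features(trials):
    """Vectorize the upper triangle of each trial's channel covariance matrix."""
    feats = []
    for x in trials:                      # x: (n_channels, n_samples)
        c = np.cov(x)                     # (n_channels, n_channels)
        iu = np.triu_indices_from(c)
        feats.append(c[iu])
    return np.asarray(feats)

def build_ensemble(n_models=5, seed=0):
    """Ensemble of small feed-forward NNs with different random initializations."""
    members = [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(64, 32),
                                         max_iter=500, random_state=seed + i))
               for i in range(n_models)]
    return VotingClassifier(members, voting="soft")

# Usage with synthetic data standing in for rest vs. imagined-speech trials:
rng = np.random.default_rng(0)
X = covariance_features(rng.standard_normal((40, 8, 256)))
y = rng.integers(0, 2, size=40)           # 0 = rest, 1 = imagined speech
clf = build_ensemble().fit(X, y)
print("train accuracy:", clf.score(X, y))
```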
Related papers
- Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI [6.926908480247951]
We propose a unified foundation model for EEG called Large Brain Model (LaBraM)
LaBraM enables cross-dataset learning by segmenting the EEG signals into EEG channel patches.
We then pre-train neural Transformers by predicting the original neural codes for the masked EEG channel patches.
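A rough sketch of this masked channel-patch objective (heavily simplified, and not the LaBraM implementation; positional encodings are omitted and the tokenizer that assigns the neural codes is stubbed with random targets):

```python
# Mask EEG channel patches and train a Transformer to predict discrete "neural
# codes" for the masked positions. All sizes here are assumptions.
import torch
import torch.nn as nn

class MaskedEEGModel(nn.Module):
    def __init__(self, patch_len=200, d_model=128, n_codes=512, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)           # per-patch embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_codes)               # predict code index

    def forward(self, patches, mask):
        # patches: (batch, n_patches, patch_len); mask: (batch, n_patches) bool
        x = self.embed(patches)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x))                     # (batch, n_patches, n_codes)

# Toy pre-training step with random data and a stub tokenizer:
model = MaskedEEGModel()
patches = torch.randn(2, 16, 200)                             # 16 channel patches per sample
mask = torch.rand(2, 16) < 0.5
codes = torch.randint(0, 512, (2, 16))                        # stand-in for tokenizer output
logits = model(patches, mask)
loss = nn.functional.cross_entropy(logits[mask], codes[mask])
loss.backward()
```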
arXiv Detail & Related papers (2024-05-29T05:08:16Z)
- Brain-Driven Representation Learning Based on Diffusion Model [25.375490061512]
Denoising diffusion probabilistic models (DDPMs) are explored in our research as a means to address this issue.
Using DDPMs in conjunction with a conditional autoencoder, our new approach considerably outperforms traditional machine learning algorithms.
Our results highlight the potential of DDPMs as a sophisticated computational method for the analysis of speech-related EEG signals.
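For context, the core DDPM training objective reduces to noise prediction. The sketch below illustrates it on placeholder feature vectors; the conditional autoencoder and the paper's architectural details are omitted and assumed.

```python
# Hedged sketch of the DDPM noise-prediction objective (not the paper's model).
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)          # cumulative noise schedule

eps_model = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

def ddpm_loss(x0):
    """x0: (batch, 64) clean features. Returns the noise-prediction MSE."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    a = alpha_bar[t].unsqueeze(1)                       # (batch, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps          # forward diffusion step
    t_feat = (t.float() / T).unsqueeze(1)               # crude timestep encoding
    eps_hat = eps_model(torch.cat([x_t, t_feat], dim=1))
    return nn.functional.mse_loss(eps_hat, eps)

loss = ddpm_loss(torch.randn(8, 64))
loss.backward()
```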
arXiv Detail & Related papers (2023-11-14T05:59:58Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which substantially increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures from a limited number of samples and that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as random bounds on a signal, and aim to show the power these methods can carry in an online setting.
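A minimal sketch of this idea, with assumed shapes and a generic decision forest standing in for the paper's exact model:

```python
# Augment a small EMG dataset with random-variance Gaussian noise, then train a
# decision-forest classifier. Data and parameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augment_with_noise(X, y, copies=4, max_sigma=0.05, seed=0):
    """Append noisy copies of each sample; the noise variance is drawn at random."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        sigma = rng.uniform(0.0, max_sigma)
        X_aug.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_aug.append(y)
    return np.concatenate(X_aug), np.concatenate(y_aug)

# Toy data standing in for EMG feature vectors of six gestures:
rng = np.random.default_rng(1)
X, y = rng.standard_normal((60, 32)), rng.integers(0, 6, size=60)
X_big, y_big = augment_with_noise(X, y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_big, y_big)
```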
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, it can be very difficult to debug a deep learning model when it does not work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- EEGminer: Discovering Interpretable Features of Brain Activity with Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
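One way such a learnable filter can be realized (a sketch under our own assumptions, not the EEGminer code) is a frequency-domain Gaussian band-pass with trainable center and width, followed by a fixed feature such as band variance:

```python
# Learnable band-pass filter applied to EEG via rFFT; center/width are trainable.
import torch
import torch.nn as nn

class LearnableBandpass(nn.Module):
    def __init__(self, n_samples, fs=128.0, f0=10.0, bw=4.0):
        super().__init__()
        self.freqs = torch.fft.rfftfreq(n_samples, d=1.0 / fs)   # (n_freqs,)
        self.center = nn.Parameter(torch.tensor(f0))
        self.width = nn.Parameter(torch.tensor(bw))

    def forward(self, x):
        # x: (batch, channels, n_samples)
        gain = torch.exp(-0.5 * ((self.freqs - self.center) / self.width) ** 2)
        X = torch.fft.rfft(x, dim=-1) * gain                      # filter in frequency domain
        y = torch.fft.irfft(X, n=x.size(-1), dim=-1)
        return y.var(dim=-1)                                      # fixed feature: band-power proxy

filt = LearnableBandpass(n_samples=256)
features = filt(torch.randn(4, 32, 256))   # (4, 32), differentiable w.r.t. center and width
```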
arXiv Detail & Related papers (2021-10-19T14:22:04Z)
- Ensemble of Convolution Neural Networks on Heterogeneous Signals for Sleep Stage Scoring [63.30661835412352]
This paper explores and compares the benefit of using additional signals apart from electroencephalograms.
The best overall model, an ensemble of depth-wise separable convolutional neural networks, achieved an accuracy of 86.06%.
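For reference, a depth-wise separable 1D convolution block of the kind such an ensemble could be built from (layer sizes here are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=7):
        super().__init__()
        # Depth-wise: one filter per input channel (groups=in_ch), then point-wise 1x1 mix.
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):                     # x: (batch, channels, time)
        return self.act(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparableConv1d(in_ch=4, out_ch=16)   # e.g. EEG + EOG + EMG channels
out = block(torch.randn(2, 4, 3000))                    # 30 s epoch at 100 Hz -> (2, 16, 3000)
```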
arXiv Detail & Related papers (2021-07-23T06:37:38Z)
- A Deep Neural Network for SSVEP-based Brain-Computer Interfaces [3.0595138995552746]
Target identification in brain-computer interface (BCI) spellers refers to the electroencephalogram (EEG) classification for predicting the target character that the subject intends to spell.
In this setting, we address the target identification and propose a novel deep neural network (DNN) architecture.
The proposed DNN processes the multi-channel SSVEP with convolutions across the sub-bands of harmonics, channels, and time, and classifies at the fully connected layer.
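A simplified sketch of a network in this spirit, with assumed layer sizes rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

n_subbands, n_channels, n_samples, n_classes = 3, 9, 250, 40

model = nn.Sequential(
    nn.Conv2d(n_subbands, 1, kernel_size=1),                  # combine sub-bands
    nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),            # spatial filter across channels
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=(1, 11), padding=(0, 5)),   # temporal convolution
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * n_samples, n_classes),                      # classify the target character
)

x = torch.randn(8, n_subbands, n_channels, n_samples)          # filter-bank SSVEP epochs
logits = model(x)                                               # (8, 40)
```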
arXiv Detail & Related papers (2020-11-17T11:11:19Z)
- A Novel Deep Learning Architecture for Decoding Imagined Speech from EEG [2.4063592468412267]
We present a novel architecture that employs a deep neural network (DNN) for classifying the words "in" and "cooperate".
Nine EEG channels, which best capture the underlying cortical activity, are chosen using common spatial pattern.
We have achieved accuracies comparable to the state-of-the-art results.
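For illustration, CSP spatial filters can be obtained from the class-wise covariance matrices via a generalized eigendecomposition; the sketch below uses placeholder data and is not the paper's implementation:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """trials_*: (n_trials, n_channels, n_samples). Returns CSP spatial filters
    sorted so that the first/last rows best discriminate the two classes."""
    def mean_cov(trials):
        return np.mean([np.cov(x) for x in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)[::-1]
    return vecs[:, order].T

rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((20, 14, 256)),      # e.g. trials of one word
                rng.standard_normal((20, 14, 256)))       # e.g. trials of the other word
print(W.shape)                                             # (14, 14) spatial filters
```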
arXiv Detail & Related papers (2020-03-19T00:57:40Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework for symbol detection design which combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
- Classification of High-Dimensional Motor Imagery Tasks based on An End-to-end role assigned convolutional neural network [21.984302611206537]
We propose an end-to-end role assigned convolutional neural network (ERA-CNN) which considers discriminative features of each upper limb region.
We demonstrate the possibility of decoding user intention with robust performance using only EEG signals and the ERA-CNN.
arXiv Detail & Related papers (2020-02-01T14:06:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.