Towards Scalable Handwriting Communication via EEG Decoding and Latent Embedding Integration
- URL: http://arxiv.org/abs/2411.09170v1
- Date: Thu, 14 Nov 2024 04:12:47 GMT
- Title: Towards Scalable Handwriting Communication via EEG Decoding and Latent Embedding Integration
- Authors: Jun-Young Kim, Deok-Seon Kim, Seo-Hyun Lee
- Abstract summary: This study centers on a written alphabet classification task, where we aim to decode EEG signals associated with handwriting.
We incorporate hand kinematics to guide the extraction of consistent embeddings from high-dimensional neural recordings.
A parallel convolutional neural network model is used to extract features from both data sources simultaneously.
Our approach achieves a classification accuracy of 91% for the nine-class task, demonstrating the feasibility of fine-grained handwriting decoding from EEG.
- Score: 4.403687945412488
- License:
- Abstract: In recent years, brain-computer interfaces have made advances in decoding various motor-related tasks, including gesture recognition and movement classification, utilizing electroencephalogram (EEG) data. These developments are fundamental in exploring how neural signals can be interpreted to recognize specific physical actions. This study centers on a written alphabet classification task, where we aim to decode EEG signals associated with handwriting. To achieve this, we incorporate hand kinematics to guide the extraction of consistent embeddings from high-dimensional neural recordings using auxiliary variables (CEBRA). These CEBRA embeddings, along with the EEG, are processed by a parallel convolutional neural network model that extracts features from both data sources simultaneously. The model classifies nine different handwritten characters, including symbols such as exclamation marks and commas, within the alphabet. We evaluate the model using a quantitative five-fold cross-validation approach and explore the structure of the embedding space through visualizations. Our approach achieves a classification accuracy of 91% for the nine-class task, demonstrating the feasibility of fine-grained handwriting decoding from EEG.
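To make the described pipeline concrete, the sketch below shows how kinematics-guided CEBRA embeddings, a two-branch ("parallel") convolutional network, and five-fold cross-validation could fit together. This is a minimal sketch under stated assumptions, not the authors' implementation: the `cebra` package's `CEBRA` estimator is used with illustrative hyperparameters, and the function names (`fit_cebra_embedding`, `cross_validate`), layer sizes, array shapes, and training-loop placeholder are assumptions introduced for illustration.

```python
# Minimal sketch (assumptions, not the authors' code): kinematics-guided CEBRA
# embeddings feeding a two-branch "parallel" CNN, evaluated with 5-fold CV.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from cebra import CEBRA  # pip install cebra


def fit_cebra_embedding(eeg: np.ndarray, kinematics: np.ndarray) -> np.ndarray:
    """Learn a low-dimensional embedding of EEG frames, using simultaneously
    recorded hand kinematics as the auxiliary (behavioral) variable.
    Shapes assumed: eeg (n_samples, n_channels), kinematics (n_samples, n_kin_dims).
    Hyperparameters below are illustrative, not the paper's settings."""
    model = CEBRA(
        model_architecture="offset10-model",
        output_dimension=8,
        batch_size=512,
        max_iterations=5000,
        device="cuda_if_available",
    )
    model.fit(eeg, kinematics)   # kinematics guide the contrastive objective
    return model.transform(eeg)  # consistent embedding of the neural recordings


class ParallelCNN(nn.Module):
    """Two 1-D convolutional branches (raw EEG and CEBRA embedding streams)
    whose pooled features are concatenated for nine-class classification."""

    def __init__(self, n_eeg_ch: int, emb_dim: int, n_classes: int = 9):
        super().__init__()
        self.eeg_branch = nn.Sequential(
            nn.Conv1d(n_eeg_ch, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.emb_branch = nn.Sequential(
            nn.Conv1d(emb_dim, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32 + 16, n_classes)

    def forward(self, eeg: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, n_eeg_ch, time); emb: (batch, emb_dim, time)
        feats = torch.cat(
            [self.eeg_branch(eeg).flatten(1), self.emb_branch(emb).flatten(1)], dim=1
        )
        return self.classifier(feats)


def cross_validate(eeg_trials, emb_trials, labels, n_splits: int = 5):
    """Five-fold cross-validation skeleton over trial-level data
    (eeg_trials: (n_trials, n_channels, time), labels: (n_trials,))."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(eeg_trials, labels)):
        model = ParallelCNN(n_eeg_ch=eeg_trials.shape[1], emb_dim=emb_trials.shape[1])
        # ... train on train_idx, evaluate accuracy on test_idx ...
        print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test trials")
```

Concatenating the pooled outputs of the two branches before a single linear classifier is one straightforward way to realize the abstract's description of a parallel model that extracts features from both data sources simultaneously; the actual branch depths and fusion strategy in the paper may differ.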
Related papers
- Visual Neural Decoding via Improved Visual-EEG Semantic Consistency [3.4061238650474657]
Methods that directly map EEG features to the CLIP embedding space may introduce mapping bias and cause semantic inconsistency.
We propose a Visual-EEG Semantic Decouple Framework that explicitly extracts the semantic-related features of these two modalities to facilitate optimal alignment.
Our method achieves state-of-the-art results in zero-shot neural decoding tasks.
arXiv Detail & Related papers (2024-08-13T10:16:10Z)
- Towards Linguistic Neural Representation Learning and Sentence Retrieval from Electroencephalogram Recordings [27.418738450536047]
We propose a two-step pipeline for converting EEG signals into sentences.
We first confirm that word-level semantic information can be learned from EEG data recorded during natural reading.
We employ a training-free retrieval method to retrieve sentences based on the predictions from the EEG encoder.
arXiv Detail & Related papers (2024-08-08T03:40:25Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- A Convolutional Spiking Network for Gesture Recognition in Brain-Computer Interfaces [0.8122270502556371]
We propose a simple yet efficient machine learning-based approach for the exemplary problem of hand gesture classification based on brain signals.
We demonstrate that this approach generalizes to different subjects with both EEG and ECoG data and achieves superior accuracy in the range of 92.74-97.07%.
arXiv Detail & Related papers (2023-04-21T16:23:40Z)
- Hierarchical Sketch Induction for Paraphrase Generation [79.87892048285819]
We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings.
We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time.
arXiv Detail & Related papers (2022-03-07T15:28:36Z)
- Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open vocabulary Electroencephalography (EEG)-To-Text Sequence-To-Sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z)
- EEG-based Cross-Subject Driver Drowsiness Recognition with an Interpretable Convolutional Neural Network [0.0]
We develop a novel convolutional neural network combined with an interpretation technique that allows sample-wise analysis of important features for classification.
Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject recognition.
arXiv Detail & Related papers (2021-05-30T14:47:20Z)
- Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from Electroencephalogram (EEG) and Eye movement (EM) data.
Our method, the Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and a Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines, reporting an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose MRG-Net, a novel online multi-modal graph network that dynamically integrates visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Learning from Heterogeneous EEG Signals with Differentiable Channel Reordering [51.633889765162685]
CHARM is a method for training a single neural network across inconsistent input channels.
We perform experiments on four EEG classification datasets and demonstrate the efficacy of CHARM.
arXiv Detail & Related papers (2020-10-21T12:32:34Z)
- Classification of High-Dimensional Motor Imagery Tasks based on An End-to-end role assigned convolutional neural network [21.984302611206537]
We propose an end-to-end role assigned convolutional neural network (ERA-CNN) that considers discriminative features of each upper limb region.
We demonstrate the possibility of decoding user intention with robust performance using only EEG signals and the ERA-CNN.
arXiv Detail & Related papers (2020-02-01T14:06:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.