Focused State Recognition Using EEG with Eye Movement-Assisted Annotation
 - URL: http://arxiv.org/abs/2407.09508v1
 - Date: Sat, 15 Jun 2024 14:06:00 GMT
 - Title: Focused State Recognition Using EEG with Eye Movement-Assisted Annotation
 - Authors: Tian-Hua Li, Tian-Fang Ma, Dan Peng, Wei-Long Zheng, Bao-Liang Lu
 - Abstract summary: Deep learning models that learn EEG and eye movement features prove effective in classifying brain activities.
A focused state indicates intense concentration on a task or thought. Distinguishing focused and unfocused states can be achieved through eye movement behaviors.
 - License: http://creativecommons.org/licenses/by-nc-nd/4.0/
 - Abstract:   With the rapid advancement of machine learning, the recognition and analysis of brain activity based on EEG and eye movement signals have attained a high level of sophistication. Using deep learning models to learn EEG and eye movement features proves effective in classifying brain activities. A focused state indicates intense concentration on a task or thought. Focused and unfocused states can be distinguished through eye movement behaviors, which reflect variations in brain activity. By calculating the binocular focusing point disparity in eye movement signals and integrating relevant EEG features, we propose an annotation method for focused states. The resulting comprehensive dataset, derived from raw data processed through a bio-acquisition device, includes both EEG features and focus labels annotated by eye movements. Extensive training and testing on several deep learning models, particularly the Transformer, yielded 90.16% accuracy in subject-dependent experiments. The validity of this approach was demonstrated, with cross-subject experiments and analyses of key frequency bands and brain regions confirming its generalizability and providing physiological explanations.
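The binocular-disparity labeling idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the disparity threshold, window length, and function name are all assumptions.

```python
import numpy as np

def focus_labels(left_gaze, right_gaze, threshold=0.5, window=250):
    """Label windows as focused (1) or unfocused (0) from binocular gaze.

    left_gaze, right_gaze: arrays of shape (n_samples, 2) holding the
    (x, y) focusing points of each eye. The threshold and window length
    here are hypothetical; the paper does not publish exact values.
    """
    # Per-sample disparity between the two eyes' focusing points.
    disparity = np.linalg.norm(left_gaze - right_gaze, axis=1)
    n_windows = len(disparity) // window
    labels = np.empty(n_windows, dtype=int)
    for i in range(n_windows):
        seg = disparity[i * window:(i + 1) * window]
        # Small average disparity suggests both eyes converge on one
        # point, which we take as a proxy for a focused state.
        labels[i] = int(seg.mean() < threshold)
    return labels
```

Each window's label would then be attached to the EEG features extracted over the same time span, yielding the eye-movement-annotated dataset the abstract describes.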
 
       
      
        Related papers
        - CAST-Phys: Contactless Affective States Through Physiological Signals Database [74.28082880875368]
The lack of affective multi-modal datasets remains a major bottleneck in developing accurate emotion recognition systems. We present the Contactless Affective States Through Physiological Signals Database (CAST-Phys), a novel high-quality dataset capable of remote physiological emotion recognition. Our analysis highlights the crucial role of physiological signals in realistic scenarios where facial expressions alone may not provide sufficient emotional information.
arXiv Detail & Related papers (2025-07-08T15:20:24Z)
        - BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals [50.76802709706976]
This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings. To unify diverse data sources, we introduce BrainTokenizer, the first tokenizer that quantises neural brain activity into discrete representations. A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining.
arXiv Detail & Related papers (2025-05-18T14:07:14Z)
        - Feature Estimation of Global Language Processing in EEG Using Attention Maps [5.173821279121835]
This study introduces a novel approach to EEG feature estimation that utilizes the weights of deep learning models to explore this association.
We demonstrate that attention maps generated from Vision Transformers and EEGNet effectively identify features that align with findings from prior studies.
Applying Mel-spectrograms with ViTs enhances the resolution of temporal and frequency-related EEG characteristics.
arXiv Detail & Related papers (2024-09-27T22:52:31Z)
        - Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a Pre-trained model based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on a large-scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
        - A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation [48.85731427874065]
This paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2) to extract effective representations from EEG with limited labels.
The KDC2 method creates scalp and neural views of EEG signals, simulating the internal and external representation of brain activity.
By modeling prior neural knowledge based on neural information consistency theory, the proposed method extracts invariant and complementary neural knowledge to generate combined representations.
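Cross-view contrastive objectives of this kind are typically trained with an InfoNCE-style loss that pulls the two views of the same EEG segment together. A generic numpy sketch of that loss (not KDC2's exact objective) might look like:

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """Generic InfoNCE loss between two views of the same EEG batch.

    view_a, view_b: arrays of shape (batch, dim); row i of each view
    comes from the same EEG segment and forms the positive pair.
    """
    # L2-normalise so dot products are cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimising this loss makes each scalp-view embedding most similar to the neural-view embedding of the same segment, which is the mechanism cross-view contrastive methods rely on.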
arXiv Detail & Related papers (2023-09-21T08:53:51Z)
        - An Interpretable and Attention-based Method for Gaze Estimation Using Electroencephalography [8.09848629098117]
We leverage a large dataset of simultaneously measured electroencephalography (EEG) and eye-tracking data, proposing an interpretable model for gaze estimation from EEG data.
We present a novel attention-based deep learning framework for EEG signal analysis, which allows the network to focus on the most relevant information in the signal and discard problematic channels.
arXiv Detail & Related papers (2023-08-09T16:58:01Z)
        - BI-AVAN: Brain-inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
        - A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications [56.458448869572294]
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z)
        - Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The degree of concentration in the brain connectivity associated with the emotional property of the target video correlates with classification performance.
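A common way to compute such connectivity features is a channel-by-channel correlation matrix. A minimal sketch using Pearson correlation (only one of several connectivity measures used in this literature, and not necessarily the one in the paper):

```python
import numpy as np

def connectivity_matrix(eeg):
    """Pearson-correlation connectivity between EEG channels.

    eeg: array of shape (n_channels, n_samples). Returns a symmetric
    (n_channels, n_channels) matrix that can be fed to a CNN as an
    image-like input, one row/column per electrode.
    """
    return np.corrcoef(eeg)
```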
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
        - A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotionally discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
        - Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
        This list is automatically generated from the titles and abstracts of the papers in this site.
       
     