A Deep Learning Approach for the Segmentation of Electroencephalography
Data in Eye Tracking Applications
- URL: http://arxiv.org/abs/2206.08672v1
- Date: Fri, 17 Jun 2022 10:17:24 GMT
- Title: A Deep Learning Approach for the Segmentation of Electroencephalography
Data in Eye Tracking Applications
- Authors: Lukas Wolf, Ard Kastrati, Martyna Beata P{\l}omecka, Jie-Ming Li,
Dustin Klebe, Alexander Veicht, Roger Wattenhofer, Nicolas Langer
- Abstract summary: We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
- Score: 56.458448869572294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The collection of eye gaze information provides a window into many critical
aspects of human cognition, health and behaviour. Additionally, many
neuroscientific studies complement the behavioural information gained from eye
tracking with the high temporal resolution and neurophysiological markers
provided by electroencephalography (EEG). One of the essential eye-tracking
software processing steps is the segmentation of the continuous data stream
into events relevant to eye-tracking applications, such as saccades, fixations,
and blinks.
Here, we introduce DETRtime, a novel framework for time-series segmentation
that creates ocular event detectors which do not require an additionally
recorded eye-tracking modality and rely solely on EEG data. Our end-to-end deep
learning-based framework brings recent advances in Computer Vision to the
forefront of time-series segmentation of EEG data. DETRtime achieves
state-of-the-art performance in ocular event detection across diverse
eye-tracking experiment paradigms. In addition to that, we provide evidence
that our model generalizes well in the task of EEG sleep stage segmentation.
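The segmentation task described above assigns each EEG timestep an event class (fixation, saccade, or blink), which is then read out as contiguous event segments. A minimal sketch of that last step, assuming a DETRtime-style segmenter has already produced per-timestep class predictions (the class encoding and merge logic here are illustrative assumptions, not the paper's code):

```python
# Hypothetical post-processing: merge per-timestep class predictions
# into (start, end, label) event segments, with `end` exclusive.

def labels_to_segments(labels):
    """Collapse a per-timestep label sequence into contiguous events."""
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        # Close the current segment at the sequence end or on a label change.
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))
            start = i
    return segments

# Assumed encoding: 0 = fixation, 1 = saccade, 2 = blink
preds = [0, 0, 0, 1, 1, 0, 0, 2, 2, 2]
print(labels_to_segments(preds))
# -> [(0, 3, 0), (3, 5, 1), (5, 7, 0), (7, 10, 2)]
```

Segment boundaries in samples can then be converted to seconds by dividing by the EEG sampling rate.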
Related papers
- Focused State Recognition Using EEG with Eye Movement-Assisted Annotation [4.705434077981147]
Deep learning models that learn EEG and eye movement features prove effective in classifying brain activities.
A focused state indicates intense concentration on a task or thought. Focused and unfocused states can be distinguished through eye movement behaviours.
arXiv Detail & Related papers (2024-06-15T14:06:00Z)
- Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning [65.54680361074882]
Eye-gaze Guided Multi-modal Alignment (EGMA) framework harnesses eye-gaze data for better alignment of medical visual and textual features.
We conduct downstream tasks of image classification and image-text retrieval on four medical datasets.
arXiv Detail & Related papers (2024-03-19T03:59:14Z)
- Polar-Net: A Clinical-Friendly Model for Alzheimer's Disease Detection in OCTA Images [53.235117594102675]
Optical Coherence Tomography Angiography is a promising tool for detecting Alzheimer's disease (AD) by imaging the retinal microvasculature.
We propose a novel deep-learning framework called Polar-Net to provide interpretable results and leverage clinical prior knowledge.
We show that Polar-Net outperforms existing state-of-the-art methods and provides more valuable pathological evidence for the association between retinal vascular changes and AD.
arXiv Detail & Related papers (2023-11-10T11:49:49Z)
- More Than Meets the Eye: Analyzing Anesthesiologists' Visual Attention in the Operating Room Using Deep Learning Models [0.0]
Currently, most studies employ wearable eye-tracking technologies to analyze anesthesiologists' visual patterns.
By utilizing a novel eye-tracking method in the form of deep learning models that process monitor-mounted webcams, we collected continuous behavioral data.
We distinguished between the baseline visual attention (VA) distribution during uneventful periods and the patterns associated with active phases or critical, unanticipated incidents.
arXiv Detail & Related papers (2023-08-10T11:12:04Z)
- An Interpretable and Attention-based Method for Gaze Estimation Using Electroencephalography [8.09848629098117]
We leverage a large data set of simultaneously measured Electroencephalography (EEG) and Eye tracking, proposing an interpretable model for gaze estimation from EEG data.
We present a novel attention-based deep learning framework for EEG signal analysis, which allows the network to focus on the most relevant information in the signal and discard problematic channels.
arXiv Detail & Related papers (2023-08-09T16:58:01Z)
- CLERA: A Unified Model for Joint Cognitive Load and Eye Region Analysis in the Wild [18.79132232751083]
Real-time analysis of the dynamics of the eye region allows us to monitor humans' visual attention allocation and estimate their mental state.
We propose CLERA, which achieves precise keypoint detection and temporal tracking in a joint-learning framework.
We also introduce a large-scale dataset of 30k human faces with joint pupil, eye-openness, and landmark annotation.
arXiv Detail & Related papers (2023-06-26T21:20:23Z)
- Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis [54.60796004113496]
We demonstrate that the eye movement of radiologists reading medical images can be a new form of supervision to train the DNN-based computer-aided diagnosis (CAD) system.
We record the tracks of the radiologists' gaze when they are reading images.
The gaze information is processed and then used to supervise the DNN's attention via an Attention Consistency module.
arXiv Detail & Related papers (2022-04-06T08:31:05Z)
- ALEBk: Feasibility Study of Attention Level Estimation via Blink Detection applied to e-Learning [6.325464216802613]
We experimentally evaluate the relationship between the eye blink rate and the attention level of students captured during online sessions.
Results suggest an inverse correlation between the eye blink frequency and the attention level.
Our results open a new research line to introduce this technology for attention level estimation on future e-learning platforms.
arXiv Detail & Related papers (2021-12-16T19:23:56Z)
- Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from electroencephalogram (EEG) and eye movement (EM) data.
Our method, Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines to report an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.