EEG-Fest: Few-shot based Attention Network for Driver's Vigilance
Estimation with EEG Signals
- URL: http://arxiv.org/abs/2211.03878v2
- Date: Mon, 24 Apr 2023 19:07:10 GMT
- Title: EEG-Fest: Few-shot based Attention Network for Driver's Vigilance
Estimation with EEG Signals
- Authors: Ning Ding, Ce Zhang, Azim Eskandarian
- Abstract summary: A lack of driver's vigilance is the main cause of most vehicle crashes.
EEG has been a reliable and efficient tool for drivers' drowsiness estimation.
- Score: 160.57870373052577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A lack of driver's vigilance is the main cause of most vehicle crashes.
Electroencephalography (EEG) has been a reliable and efficient tool for drivers'
drowsiness estimation. Even though previous studies have developed accurate and
robust vigilance detection algorithms, these methods still face challenges in
the following areas: (a) training with small sample sizes, (b) anomaly signal
detection, and (c) subject-independent classification. In this paper, we propose
a generalized few-shot model, namely EEG-Fest, to address these drawbacks. The
EEG-Fest model can (a) classify a query sample's drowsiness from only a few
support samples, (b) identify whether a query sample is an anomalous signal, and
(c) achieve subject-independent classification. The proposed algorithm achieves
state-of-the-art results on the SEED-VIG dataset and the SADT dataset. Accuracy
on the drowsy class reaches 92% and 94% with 1-shot and 5-shot support samples
on SEED-VIG, and 62% and 78% with 1-shot and 5-shot support samples on SADT.
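The abstract does not include an implementation, but the few-shot setup it describes (classifying a query EEG sample against a handful of labelled support samples and rejecting queries that look anomalous) can be illustrated with a simple prototypical-network-style sketch. The encoder, channel count, sequence length, and rejection threshold below are illustrative placeholders, not the actual EEG-Fest attention architecture.

```python
# Minimal sketch of few-shot vigilance classification with an anomaly check.
# This is NOT the EEG-Fest implementation; the encoder, shapes and threshold
# are placeholders for the general prototypical-network idea.
import torch
import torch.nn as nn


class EEGEncoder(nn.Module):
    """Toy 1-D CNN mapping an EEG segment (channels x time) to an embedding."""

    def __init__(self, n_channels: int = 17, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.net(x).squeeze(-1))


def few_shot_predict(encoder, support, support_labels, query, reject_dist=10.0):
    """Classify `query` against class prototypes built from the support set.

    support: (n_support, channels, time), support_labels: (n_support,)
    query:   (n_query, channels, time)
    Returns predicted labels, with -1 for queries whose distance to every
    prototype exceeds `reject_dist` (a crude anomaly/rejection rule).
    """
    with torch.no_grad():
        s_emb = encoder(support)                       # (n_support, emb_dim)
        q_emb = encoder(query)                         # (n_query, emb_dim)
        classes = support_labels.unique()
        protos = torch.stack([s_emb[support_labels == c].mean(0) for c in classes])
        dists = torch.cdist(q_emb, protos)             # (n_query, n_classes)
        min_dist, idx = dists.min(dim=1)
        preds = classes[idx]
        preds[min_dist > reject_dist] = -1             # flag as anomalous signal
    return preds


# Example: 5-shot support set for two classes (0 = alert, 1 = drowsy).
encoder = EEGEncoder()
support = torch.randn(10, 17, 256)
support_labels = torch.tensor([0] * 5 + [1] * 5)
query = torch.randn(3, 17, 256)
print(few_shot_predict(encoder, support, support_labels, query))
```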
Related papers
- How Low Can You Go? Surfacing Prototypical In-Distribution Samples for Unsupervised Anomaly Detection [48.30283806131551]
We show that unsupervised anomaly detection (UAD) with extremely few training samples can already match -- and in some cases even surpass -- the performance of training with the whole training dataset.
We propose an unsupervised method to reliably identify prototypical samples to further boost UAD performance.
arXiv Detail & Related papers (2023-12-06T15:30:47Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Detecting Driver Drowsiness as an Anomaly Using LSTM Autoencoders [0.0]
An LSTM autoencoder-based architecture is utilized for drowsiness detection, with ResNet-34 as the feature extractor (see the sketch after this list).
The proposed model achieves a detection rate of 0.8740 area under the curve (AUC) and provides significant improvements in certain scenarios.
arXiv Detail & Related papers (2022-09-12T14:25:07Z)
- Studying Drowsiness Detection Performance while Driving through Scalable Machine Learning Models using Electroencephalography [0.0]
Driver drowsiness is one of the leading causes of traffic accidents.
Brain-Computer Interfaces (BCIs) and Machine Learning (ML) have enabled the detection of drivers' drowsiness.
This work presents an intelligent framework employing BCIs and features based on electroencephalography for detecting drowsiness in driving scenarios.
arXiv Detail & Related papers (2022-09-08T22:14:33Z)
- Vector-Based Data Improves Left-Right Eye-Tracking Classifier Performance After a Covariate Distributional Shift [0.0]
We propose a fine-grain data approach for EEG-ET data collection in order to create more robust benchmarking.
We train machine learning models utilizing both coarse-grain and fine-grain data and compare their accuracies when tested on data of similar/different distributional patterns.
Results showed that models trained on fine-grain, vector-based data were less susceptible to distributional shifts than models trained on coarse-grain, binary-classified data.
arXiv Detail & Related papers (2022-07-31T16:27:50Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, taking raw EEG signals as input without requiring complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address the scalability of large-scale heterogeneous outlier detection.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
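For the related entry above that treats drowsiness as an anomaly with an LSTM autoencoder, the snippet below is a rough, non-authoritative sketch of that general idea rather than the cited paper's actual model: feature sequences are reconstructed and samples are scored by reconstruction error. The feature dimension, sequence length, and threshold are placeholders (the cited work extracts features with ResNet-34).

```python
# Minimal sketch of drowsiness-as-anomaly detection with an LSTM autoencoder.
# Not the cited paper's model; feature dimension, sequence length and the
# threshold are illustrative placeholders.
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, feat_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compress the sequence, then repeat the last hidden state as the
        # decoder input and reconstruct the original feature sequence.
        _, (h, _) = self.encoder(x)                     # h: (1, batch, hidden)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # (batch, seq, hidden)
        recon, _ = self.decoder(z)
        return recon


def anomaly_score(model: LSTMAutoencoder, x: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error; high scores suggest drowsy/anomalous input."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2))


# Train only on 'alert' sequences; at test time, flag high-error sequences.
model = LSTMAutoencoder()
alert_batch = torch.randn(8, 30, 128)    # (batch, seq_len, feat_dim) placeholder
scores = anomaly_score(model, alert_batch)
print(scores > 0.5)                      # illustrative threshold
```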