A Novel Capsule Neural Network Based Model for Drowsiness Detection
Using Electroencephalography Signals
- URL: http://arxiv.org/abs/2204.01666v1
- Date: Mon, 4 Apr 2022 17:23:53 GMT
- Authors: Luis Guarda, Juan Tapia, Enrique Lopez Droguett, Marcelo Ramos
- Abstract summary: Capsule Neural Networks are a brand-new Deep Learning algorithm proposed to work with reduced amounts of data.
This paper presents a Deep Learning-based method for drowsiness detection with CapsNet by using a concatenation of spectrogram images of the electroencephalography signals channels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The early detection of drowsiness has become vital to ensure the correct and
safe execution of tasks in several industries. Because a human subject's mental
state transitions gradually between alertness and drowsiness, automated drowsiness
detection is a complex problem to tackle. Electroencephalography signals
allow us to record variations in the electrical potential of an individual's brain,
where each signal gives specific information about the subject's mental state.
However, the nature of these signals makes their acquisition complex, so it is
hard to gather the large volumes of data that Deep Learning techniques typically
need for optimal processing and classification. Nevertheless, Capsule Neural
Networks are a brand-new Deep Learning algorithm proposed to work with reduced
amounts of data, and they robustly handle hierarchical relationships in the
data, an essential property for working with biomedical signals. Therefore, this paper presents a Deep
Learning-based method for drowsiness detection with CapsNet by using a
concatenation of spectrogram images of the electroencephalography signals
channels. The proposed CapsNet model is compared with a Convolutional Neural
Network and outperforms it, obtaining an average accuracy of 86.44% and a
sensitivity of 87.57%, against an average accuracy of 75.86% and a sensitivity
of 79.47% for the CNN, showing that CapsNet is more suitable for this kind of
dataset and task.
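The channel-concatenation step described above can be sketched as follows. This is a minimal illustration using synthetic data and scipy's `signal.spectrogram`; the channel count, sampling rate, and STFT parameters are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from scipy import signal

# Hypothetical setup: 4 EEG channels, 10 s at 128 Hz (synthetic noise
# stands in for real recordings; all parameters are assumptions).
fs = 128
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, fs * 10))  # shape: (channels, samples)

# Compute one spectrogram per channel ...
spectrograms = []
for channel in eeg:
    f, t, Sxx = signal.spectrogram(channel, fs=fs, nperseg=64, noverlap=32)
    spectrograms.append(10 * np.log10(Sxx + 1e-12))  # log-power in dB

# ... and stack them along the frequency axis into a single image
# that a CapsNet (or CNN) classifier can consume.
stacked = np.concatenate(spectrograms, axis=0)
print(stacked.shape)  # (channels * freq_bins, time_frames)
```

Concatenating along the frequency axis keeps one two-dimensional input per trial, so a standard image classifier can be applied without architectural changes for multi-channel data.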
Related papers
- End-to-end Stroke imaging analysis, using reservoir computing-based effective connectivity, and interpretable Artificial intelligence [42.52549987351643]
We propose a reservoir computing-based and directed graph analysis pipeline.
The goal of this pipeline is to define an efficient brain representation for connectivity in stroke data.
This representation is used within a directed graph convolutional architecture and investigated with explainable artificial intelligence (AI) tools.
arXiv Detail & Related papers (2024-07-17T13:34:05Z) - Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z) - Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum to transform time-varying analog signals into efficient spike patterns.
The model provides a sparse and efficient coding scheme with precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
arXiv Detail & Related papers (2023-09-11T13:06:19Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Impact of spiking neurons leakages and network recurrences on
event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are getting increased interest for low-latency and low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z) - Graph Neural Networks with Trainable Adjacency Matrices for Fault
Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals in each sensor separately, to take into account their correlation and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during the training of the graph neural network. This makes it possible to train models on data where the dependencies between the sensors are not known in advance.
arXiv Detail & Related papers (2022-10-20T11:03:21Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Information contraction in noisy binary neural networks and its
implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound for the required number of neurons in noisy neural networks, which is first of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
arXiv Detail & Related papers (2021-01-28T00:01:45Z) - Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.