MorpheusNet: Resource efficient sleep stage classifier for embedded
on-line systems
- URL: http://arxiv.org/abs/2401.10284v1
- Date: Sun, 14 Jan 2024 17:52:08 GMT
- Authors: Ali Kavoosi, Morgan P. Mitchell, Raveen Kariyawasam, John E. Fleming,
Penny Lewis, Heidi Johansen-Berg, Hayriye Cagnan, Timothy Denison
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sleep Stage Classification (SSC) is a labor-intensive task, requiring experts
to examine hours of electrophysiological recordings for manual classification.
This is a limiting factor when it comes to leveraging sleep stages for
therapeutic purposes. With increasing affordability and expansion of wearable
devices, automating SSC may enable deployment of sleep-based therapies at
scale. Deep Learning has gained increasing attention as a potential method to
automate this process. Previous research has shown accuracy comparable to
manual expert scores. However, previous approaches require a sizable amount of
memory and computational resources. This constrains the ability to classify in
real time and deploy models on the edge. To address this gap, we aim to provide
a model capable of predicting sleep stages in real-time, without requiring
access to external computational sources (e.g., mobile phone, cloud). The
algorithm is power efficient to enable use on embedded battery powered systems.
Our compact sleep stage classifier can be deployed on most off-the-shelf
microcontrollers (MCUs) with constrained hardware. This is because our approach
has a small memory footprint and requires significantly fewer operations. The
model was tested on three publicly available databases and achieved
performance comparable to the state of the art, whilst reducing model
complexity by orders of magnitude (up to 280 times smaller compared to state of
the art). We further optimized the model with quantization of parameters to 8
bits with only an average drop of 0.95% in accuracy. When implemented in
firmware, the quantized model achieves a latency of 1.6 seconds on an Arm
Cortex-M4 processor, allowing its use for on-line SSC-based therapies.
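The abstract reports quantizing model parameters to 8 bits with only a 0.95% average accuracy drop. The paper's exact quantization scheme is not given here; the sketch below assumes a simple affine (asymmetric) post-training quantization, which illustrates the general idea of mapping float weights onto an 8-bit grid and back.

```python
import numpy as np

def quantize_int8(w):
    """Affine post-training quantization of a weight tensor to 8 bits.
    Illustrative sketch only; the paper's exact scheme is not specified."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map 8-bit codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

np.random.seed(0)
w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# Reconstruction error is bounded by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Storing `q` instead of `w` cuts memory 4x versus float32, which is the kind of saving that makes MCU deployment feasible.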
Related papers
- Annotating sleep states in children from wrist-worn accelerometer data
using Machine Learning [4.506099292980221]
We propose to model the accelerometer data using different machine learning (ML) techniques such as support vectors, boosting, ensemble methods, and more complex approaches involving LSTMs and Region-based CNNs.
Later, we aim to evaluate these approaches using the Event Detection Average Precision (EDAP) score (similar to the IoU metric) to eventually compare the predictive power and model performance.
arXiv Detail & Related papers (2023-12-09T09:10:39Z)
- DeepGEMM: Accelerated Ultra Low-Precision Inference on CPU Architectures
using Lookup Tables [49.965024476651706]
DeepGEMM is a lookup table based approach for the execution of ultra low-precision convolutional neural networks on SIMD hardware.
Our implementation outperforms corresponding 8-bit integer kernels by up to 1.74x on x86 platforms.
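The DeepGEMM summary above describes replacing ultra low-precision multiplies with table lookups. As a hypothetical sketch (the codebooks and bit widths below are assumed, not taken from the paper): with 2-bit weights and 2-bit activations there are only 4 × 4 = 16 possible products, so a multiply reduces to indexing a precomputed table.

```python
import numpy as np

# Assumed 2-bit codebooks for illustration (not DeepGEMM's actual values).
W_LEVELS = np.array([-1.5, -0.5, 0.5, 1.5], dtype=np.float32)
A_LEVELS = np.array([0.0, 1.0, 2.0, 3.0], dtype=np.float32)

# Precompute all 16 products, indexed by (weight_code << 2) | act_code.
LUT = (W_LEVELS[:, None] * A_LEVELS[None, :]).reshape(-1)

def lut_dot(w_codes, a_codes):
    """Dot product of 2-bit-coded vectors using only lookups and adds."""
    return float(LUT[(w_codes << 2) | a_codes].sum())

w = np.array([0, 3, 1, 2], dtype=np.int64)
a = np.array([2, 1, 3, 0], dtype=np.int64)
ref = float((W_LEVELS[w] * A_LEVELS[a]).sum())
assert abs(lut_dot(w, a) - ref) < 1e-6  # matches the explicit multiply-add
```

Real implementations vectorize the lookup with SIMD shuffle instructions, which is where the reported speedups over 8-bit integer kernels come from.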
arXiv Detail & Related papers (2023-04-18T15:13:10Z)
- Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- A CNN-Transformer Deep Learning Model for Real-time Sleep Stage
Classification in an Energy-Constrained Wireless Device [2.5672176409865686]
This paper proposes a deep learning (DL) model for automatic sleep stage classification based on single-channel EEG data.
The model was designed to run on energy and memory-constrained devices for real-time operation with local processing.
We tested a reduced-sized version of the proposed model on a low-cost Arduino Nano 33 BLE board and it was fully functional and accurate.
arXiv Detail & Related papers (2022-11-20T16:22:30Z)
- A Closed-loop Sleep Modulation System with FPGA-Accelerated Deep
Learning [1.5569382274788235]
We develop a sleep modulation system that supports closed-loop operations on a low-power field-programmable gate array (FPGA) device, which accelerates the deep learning (DL) model.
The model has been validated using a public sleep database containing 81 subjects, achieving a state-of-the-art classification accuracy of 85.8% and an F1-score of 79%.
arXiv Detail & Related papers (2022-11-19T01:47:53Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Do Not Sleep on Linear Models: Simple and Interpretable Techniques
Outperform Deep Learning for Sleep Scoring [1.6339105551302067]
We argue that most deep learning solutions for sleep scoring are limited in their real-world applicability as they are hard to train, deploy, and reproduce.
In this work, we revisit the problem of sleep stage classification using classical machine learning.
Results show that state-of-the-art performance can be achieved with a conventional machine learning pipeline.
arXiv Detail & Related papers (2022-07-15T21:03:11Z)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent
Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at
Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- REST: Robust and Efficient Neural Networks for Sleep Monitoring in the
Wild [62.36144064259933]
We propose REST, a new method that simultaneously tackles both issues via adversarial training and controlling the Lipschitz constant of the neural network.
We demonstrate that REST produces highly-robust and efficient models that substantially outperform the original full-sized models in the presence of noise.
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
arXiv Detail & Related papers (2020-01-29T17:23:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.