REST: Robust and Efficient Neural Networks for Sleep Monitoring in the
Wild
- URL: http://arxiv.org/abs/2001.11363v1
- Date: Wed, 29 Jan 2020 17:23:16 GMT
- Title: REST: Robust and Efficient Neural Networks for Sleep Monitoring in the
Wild
- Authors: Rahul Duggal, Scott Freitas, Cao Xiao, Duen Horng Chau, Jimeng Sun
- Abstract summary: We propose REST, a new method that simultaneously tackles both issues via adversarial training and controlling the Lipschitz constant of the neural network.
We demonstrate that REST produces highly-robust and efficient models that substantially outperform the original full-sized models in the presence of noise.
By deploying these models to an Android application on a smartphone, we quantitatively observe that REST allows models to achieve up to 17x energy reduction and 9x faster inference.
- Score: 62.36144064259933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, significant attention has been devoted towards integrating
deep learning technologies in the healthcare domain. However, to safely and
practically deploy deep learning models for home health monitoring, two
significant challenges must be addressed: the models should be (1) robust
against noise; and (2) compact and energy-efficient. We propose REST, a new
method that simultaneously tackles both issues via 1) adversarial training and
controlling the Lipschitz constant of the neural network through spectral
regularization while 2) enabling neural network compression through sparsity
regularization. We demonstrate that REST produces highly-robust and efficient
models that substantially outperform the original full-sized models in the
presence of noise. For the sleep staging task over single-channel
electroencephalogram (EEG), the REST model achieves a macro-F1 score of 0.67
vs. 0.39 achieved by a state-of-the-art model in the presence of Gaussian noise
while obtaining 19x parameter reduction and 15x MFLOPS reduction on two large,
real-world EEG datasets. By deploying these models to an Android application on
a smartphone, we quantitatively observe that REST allows models to achieve up
to 17x energy reduction and 9x faster inference. We open-source the code
repository with this paper: https://github.com/duggalrahul/REST.
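As a concrete illustration, here is a minimal sketch of how the three ingredients named in the abstract (adversarial training, spectral regularization of the Lipschitz constant, and sparsity regularization) might combine into one training objective. The FGSM attack, penalty forms, and coefficients below are illustrative assumptions, not the paper's exact formulation:
```python
# Hypothetical sketch of a REST-style objective: adversarial training plus
# spectral (Lipschitz) and sparsity regularization. Coefficients and the
# choice of attack are placeholders, not the paper's exact method.
import torch
import torch.nn.functional as F

def spectral_penalty(model):
    # Sum of largest singular values of weight matrices; bounding these
    # bounds a proxy for the network's Lipschitz constant.
    total = 0.0
    for p in model.parameters():
        if p.dim() >= 2:
            w = p.flatten(1)  # treat conv kernels as matrices
            total = total + torch.linalg.matrix_norm(w, ord=2)
    return total

def sparsity_penalty(model):
    # L1 penalty drives weights toward zero, enabling later compression.
    return sum(p.abs().sum() for p in model.parameters())

def rest_style_loss(model, x, y, eps=0.1, lam_spec=1e-4, lam_sparse=1e-5):
    # FGSM-style adversarial example (one common choice of attack).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss_clean, x_adv)
    x_adv = (x + eps * grad.sign()).detach()

    loss_adv = F.cross_entropy(model(x_adv), y)
    return (loss_adv
            + lam_spec * spectral_penalty(model)
            + lam_sparse * sparsity_penalty(model))
```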
Related papers
- Truncated Consistency Models [57.50243901368328]
Training consistency models requires learning to map all intermediate points along PF ODE trajectories to their corresponding endpoints.
We empirically find that this training paradigm limits the one-step generation performance of consistency models.
We propose a new parameterization of the consistency function and a two-stage training procedure that prevents the truncated-time training from collapsing to a trivial solution.
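A minimal sketch of what a truncated consistency-training step could look like, assuming flat data, an EMA teacher network, and a simple noise schedule; the truncation threshold and distance function are illustrative, not the paper's parameterization:
```python
# Hypothetical sketch of a truncated consistency-training step. The schedule,
# MSE distance, and truncation threshold t_min are illustrative assumptions.
import torch
import torch.nn.functional as F

def consistency_step(f, f_ema, x0, t_min=0.5, t_max=80.0):
    # Sample adjacent noise levels, truncated to [t_min, t_max], so training
    # focuses on mapping later (noisier) trajectory points to the endpoint.
    t2 = t_min + (t_max - t_min) * torch.rand(x0.size(0), device=x0.device)
    t1 = (t2 - 0.01).clamp(min=t_min)
    noise = torch.randn_like(x0)
    x_t2 = x0 + t2.view(-1, 1) * noise      # two points on the same trajectory
    x_t1 = x0 + t1.view(-1, 1) * noise
    with torch.no_grad():
        target = f_ema(x_t1, t1)            # EMA teacher output at earlier time
    return F.mse_loss(f(x_t2, t2), target)  # enforce self-consistency
```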
arXiv Detail & Related papers (2024-10-18T22:38:08Z)
- Exploring Green AI for Audio Deepfake Detection [21.17957700009653]
State-of-the-art audio deepfake detectors leveraging deep neural networks exhibit impressive recognition performance.
Deep NLP models, however, produce around 626k lbs of CO2, equivalent to five times the lifetime emissions of an average US car.
This study presents a novel framework for audio deepfake detection that can be seamlessly trained using standard CPU resources.
arXiv Detail & Related papers (2024-03-21T10:54:21Z)
- EDAC: Efficient Deployment of Audio Classification Models For COVID-19 Detection [0.0]
The global spread of COVID-19 had severe consequences for public health and the world economy.
Various researchers made use of machine learning methods in an attempt to detect COVID-19.
The solutions leverage various input features, such as CT scans or cough audio signals, with state-of-the-art results arising from deep neural network architectures.
To enable efficient deployment of such models, we first recreated two models that use cough audio recordings to detect COVID-19.
arXiv Detail & Related papers (2023-09-11T10:07:51Z)
- A CNN-Transformer Deep Learning Model for Real-time Sleep Stage Classification in an Energy-Constrained Wireless Device [2.5672176409865686]
This paper proposes a deep learning (DL) model for automatic sleep stage classification based on single-channel EEG data.
The model was designed to run on energy and memory-constrained devices for real-time operation with local processing.
We tested a reduced-sized version of the proposed model on a low-cost Arduino Nano 33 BLE board and it was fully functional and accurate.
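A minimal sketch of a CNN-Transformer of this flavor for single-channel EEG epochs; the layer sizes and the 30 s / 100 Hz input below are assumptions, not the paper's architecture:
```python
# Hypothetical minimal CNN-Transformer for single-channel EEG sleep staging,
# in the spirit of the model described above; all sizes are illustrative.
import torch
import torch.nn as nn

class TinySleepNet(nn.Module):
    def __init__(self, n_classes=5, d_model=64):
        super().__init__()
        # CNN front end turns the raw 1-D signal into a short token sequence.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=50, stride=25), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=8, stride=4), nn.ReLU(),
        )
        # A single small Transformer layer models long-range context cheaply.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                     # x: (batch, 1, samples)
        tokens = self.cnn(x).transpose(1, 2)  # (batch, seq, d_model)
        return self.head(self.encoder(tokens).mean(dim=1))

logits = TinySleepNet()(torch.randn(2, 1, 3000))  # one 30 s epoch at 100 Hz
```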
arXiv Detail & Related papers (2022-11-20T16:22:30Z)
- Go Beyond Multiple Instance Neural Networks: Deep-learning Models based on Local Pattern Aggregation [0.0]
Convolutional neural networks (CNNs) have brought breakthroughs in processing clinical electrocardiograms (ECGs) and speaker-independent speech.
In this paper, we propose local pattern aggregation-based deep-learning models to effectively deal with both problems.
The novel network structure, called LPANet, has cropping and aggregation operations embedded into it.
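A minimal sketch of the crop-and-aggregate idea, assuming precomputed locations of local patterns (e.g., ECG R-peaks); the window width and encoder below are illustrative, not LPANet's actual layers:
```python
# Hypothetical sketch of local pattern aggregation: crop fixed-width windows
# around points of interest, encode each crop with a shared CNN, aggregate.
import torch
import torch.nn as nn

class LocalPatternAggregator(nn.Module):
    def __init__(self, width=64, n_classes=2):
        super().__init__()
        self.width = width
        self.encoder = nn.Sequential(                  # shared crop encoder
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, signal, centers):
        # signal: (samples,); centers: indices of local patterns to crop.
        half = self.width // 2
        crops = torch.stack([signal[c - half:c + half] for c in centers])
        feats = self.encoder(crops.unsqueeze(1))       # (n_crops, 16)
        return self.head(feats.mean(dim=0))            # aggregate over crops

net = LocalPatternAggregator()
out = net(torch.randn(5000), centers=[300, 1200, 2600])
```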
arXiv Detail & Related papers (2022-05-28T13:18:18Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
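A minimal sketch of the subspace idea, assuming two trained endpoint weight sets: interpolate by alpha at inference time and prune more aggressively as alpha grows. The alpha-to-sparsity mapping here is an assumption, not the paper's rule:
```python
# Hypothetical sketch of a "compressible subspace": two endpoint weight sets
# define a line in weight space; each alpha yields one accuracy-efficiency
# trade-off point, selected at inference time.
import copy
import torch

def interpolate_weights(model_a, model_b, alpha):
    model = copy.deepcopy(model_a)
    with torch.no_grad():
        for p, pa, pb in zip(model.parameters(), model_a.parameters(),
                             model_b.parameters()):
            p.copy_((1 - alpha) * pa + alpha * pb)
    return model

def apply_sparsity(model, sparsity):
    # Global magnitude pruning: zero out the smallest fraction of weights.
    with torch.no_grad():
        for p in model.parameters():
            k = int(sparsity * p.numel())
            if k > 0:
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).float())
    return model

# e.g. fast = apply_sparsity(interpolate_weights(m_acc, m_eff, 0.8), 0.8)
```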
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare the estimation accuracy and fidelity of the generated mixed models against statistical models, the roofline model, and a refined roofline model.
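A minimal sketch of stacked latency estimation: an analytical roofline bound per layer, refined by a statistical correction model fitted to micro-kernel benchmarks. The hardware constants and feature set below are made-up placeholders:
```python
# Hypothetical sketch of stacked execution-time estimation. The peak numbers
# are invented; `correction` stands in for a fitted statistical model.
PEAK_FLOPS = 50e9   # ops/s, assumed accelerator peak
PEAK_BW = 10e9      # bytes/s, assumed memory bandwidth

def roofline_latency(flops, bytes_moved):
    # A layer is bound by whichever resource it saturates first.
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

def stacked_latency(layers, correction):
    # `correction` is a fitted callable (e.g., a regressor's predict method)
    # mapping layer features to a multiplicative refinement of the bound.
    total = 0.0
    for flops, bytes_moved in layers:
        base = roofline_latency(flops, bytes_moved)
        total += base * correction([[flops, bytes_moved]])[0]
    return total
```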
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
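A minimal sketch of the feedback idea, assuming magnitude masks recomputed every step: the forward pass runs the pruned weights, but the gradient updates a dense copy, so pruned weights can recover later. The sparsity level is illustrative:
```python
# Hypothetical sketch of dynamic pruning with feedback on one training step.
import torch

def dpf_step(model, loss_fn, x, y, opt, sparsity=0.9):
    dense = [p.detach().clone() for p in model.parameters()]  # dense copy
    with torch.no_grad():
        for p in model.parameters():                  # apply magnitude mask
            k = int(sparsity * p.numel())
            if k > 0:
                thresh = p.abs().flatten().kthvalue(k).values
                p.mul_((p.abs() > thresh).float())
    loss = loss_fn(model(x), y)                       # forward on pruned net
    opt.zero_grad()
    loss.backward()                                   # grads at pruned weights
    with torch.no_grad():
        for p, d in zip(model.parameters(), dense):
            p.copy_(d)                                # restore dense weights
    opt.step()                                        # update the dense copy
    return loss.item()
```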
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
- TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids [13.369813069254132]
We use model compression techniques to bridge the gap between large neural networks and battery-powered hearing aid hardware.
We are the first to demonstrate their efficacy for RNN speech enhancement, using pruning and integer quantization of weights/activations.
Our model achieves a computational latency of 2.39ms, well within the 10ms target and 351x better than previous work.
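A minimal sketch of these two compression steps on a toy masking-based enhancement LSTM, using standard PyTorch pruning and dynamic-quantization utilities; the model and pruning amount are illustrative:
```python
# Hypothetical sketch: magnitude-prune an LSTM's weights, then apply
# post-training dynamic int8 quantization. Sizes/amounts are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyEnhancer(nn.Module):
    def __init__(self, n_bins=129):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, 128, batch_first=True)
        self.mask = nn.Linear(128, n_bins)

    def forward(self, spec):                       # spec: (batch, frames, bins)
        h, _ = self.lstm(spec)
        return torch.sigmoid(self.mask(h)) * spec  # masking-based enhancement

model = TinyEnhancer()
# 1) Magnitude pruning of the input and recurrent weight matrices.
for name in ["weight_ih_l0", "weight_hh_l0"]:
    prune.l1_unstructured(model.lstm, name=name, amount=0.8)
    prune.remove(model.lstm, name)                 # bake masks into weights
# 2) Dynamic int8 quantization of weights (activations quantized on the fly).
small = torch.quantization.quantize_dynamic(model, {nn.LSTM, nn.Linear},
                                            dtype=torch.qint8)
```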
arXiv Detail & Related papers (2020-05-20T20:37:47Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
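A minimal sketch for a single linear layer, using a hard assignment (a special case of an optimal transport plan) to align neurons before averaging; full OT fusion would use soft transport plans, and a real multi-layer fusion must also permute the next layer's incoming weights:
```python
# Hypothetical sketch of aligned fusion for one linear layer: match the
# second model's neurons to the first, then average the aligned weights.
import torch
from scipy.optimize import linear_sum_assignment

def fuse_linear(w_a: torch.Tensor, w_b: torch.Tensor) -> torch.Tensor:
    # w_a, w_b: (out_features, in_features) weights of same-shaped layers.
    cost = torch.cdist(w_a, w_b).numpy()      # pairwise neuron distances
    _, col = linear_sum_assignment(cost)      # best one-to-one matching
    return 0.5 * (w_a + w_b[torch.as_tensor(col)])  # average after alignment

fused = fuse_linear(torch.randn(8, 4), torch.randn(8, 4))
```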
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.