C$^2$SP-Net: Joint Compression and Classification Network for Epilepsy
Seizure Prediction
- URL: http://arxiv.org/abs/2110.13674v1
- Date: Tue, 26 Oct 2021 13:09:16 GMT
- Title: C$^2$SP-Net: Joint Compression and Classification Network for Epilepsy
Seizure Prediction
- Authors: Di Wu, Yi Shi, Ziyu Wang, Jie Yang, Mohamad Sawan
- Abstract summary: We propose C$^2$SP-Net to jointly solve compression, prediction, and reconstruction with a single neural network.
A plug-and-play in-sensor compression matrix is constructed to reduce transmission bandwidth requirement.
Our proposed method produces an average loss of 0.35% in prediction accuracy with compression ratios ranging from 1/2 to 1/16.
- Score: 10.21441881111824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in brain-machine interface technology have made
seizure prediction possible. However, the communication of large volumes of
electrophysiological signals between sensors and processing apparatus, and the
related computation, have become two major bottlenecks for seizure prediction
systems due to constrained bandwidth and limited computation resources,
especially for wearable and implantable medical devices. Although compressive
sensing (CS) can be adopted to compress the signals and reduce the
communication bandwidth requirement, it needs a complex reconstruction
procedure before the signal can be used for seizure prediction. In this paper,
we propose C$^2$SP-Net to jointly solve compression, prediction, and
reconstruction with a single neural network. A plug-and-play in-sensor
compression matrix is constructed to reduce the transmission bandwidth
requirement. The compressed signal can be used for seizure prediction without
additional reconstruction steps, and the original signal can also be
reconstructed with high fidelity. Prediction accuracy, sensitivity, false
prediction rate, and reconstruction quality of the proposed framework are
evaluated under various compression ratios. The experimental results show that
our model outperforms competitive state-of-the-art baselines by a large margin
in prediction accuracy. In particular, our proposed method incurs an average
loss of only 0.35% in prediction accuracy at compression ratios ranging from
1/2 to 1/16.
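The in-sensor compression matrix described above follows the standard compressive-sensing measurement model. Below is a minimal sketch of that idea; the window length, compression ratio, and Gaussian sensing matrix are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Illustrative sketch only: a fixed sensing matrix Phi maps a length-N
# EEG window x to M << N measurements y = Phi @ x. In the paper's
# setting, the classifier consumes y directly, so no CS reconstruction
# is needed before prediction. N, the ratio, and the Gaussian Phi are
# assumptions chosen for illustration.
rng = np.random.default_rng(0)

N = 1024                # samples per EEG window (assumed)
ratio = 4               # compression ratio 1/4
M = N // ratio          # measurements actually transmitted

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random sensing matrix
x = rng.standard_normal(N)                      # stand-in EEG window

y = Phi @ x             # compressed signal sent over the constrained link
print(y.shape)          # (256,): 4x fewer samples to transmit
```

The key point the paper exploits is that `y` itself carries enough information for classification, so the expensive reconstruction step of classical CS pipelines can be skipped on the prediction path.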
Related papers
- Coarse-to-fine Deep Video Coding with Hyperprior-guided Mode Prediction [50.361427832256524]
We propose a coarse-to-fine (C2F) deep video compression framework for better motion compensation.
Our C2F framework can achieve better motion compensation results without significantly increasing bit costs.
arXiv Detail & Related papers (2022-06-15T11:38:53Z)
- A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation [37.525277809849776]
The goal of model compression is to reduce the size of a large neural network while retaining a comparable performance.
We use a sparsity-sensitive $\ell_q$-norm to characterize compressibility and provide a relationship between the soft sparsity of the weights in the network and the degree of compression.
We also develop adaptive algorithms for pruning each neuron in the network informed by our theory.
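To make the quoted idea concrete, here is a minimal illustration (not that paper's algorithm) of why an $\ell_q$ quasi-norm with $0 < q < 1$ separates compressible from incompressible weight vectors:

```python
import numpy as np

# Illustration only: the l_q quasi-norm ||w||_q = (sum |w_i|^q)^(1/q)
# with 0 < q < 1 is small when the weight mass concentrates in a few
# entries -- the "soft sparsity" that makes a layer prunable.
def lq_norm(w, q):
    return float(np.sum(np.abs(w) ** q) ** (1.0 / q))

dense  = np.array([0.5, 0.5, 0.5, 0.5])   # mass spread evenly
sparse = np.array([1.0, 0.0, 0.0, 0.0])   # same l2 energy, concentrated

assert np.isclose(np.linalg.norm(dense), np.linalg.norm(sparse))  # equal l2
assert lq_norm(sparse, 0.5) < lq_norm(dense, 0.5)  # l_q flags the sparse one
print(lq_norm(sparse, 0.5), lq_norm(dense, 0.5))   # 1.0 vs 8.0
```

Both vectors have the same $\ell_2$ norm, but the $\ell_q$ value is eight times smaller for the concentrated vector, which is the signal such a compressibility measure picks up.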
arXiv Detail & Related papers (2022-06-11T20:10:35Z)
- Binary Single-dimensional Convolutional Neural Network for Seizure Prediction [4.42106872060105]
We propose a hardware-friendly network called Binary Single-dimensional Convolutional Neural Network (BSDCNN) for epileptic seizure prediction.
BSDCNN utilizes 1D convolutional kernels to improve prediction performance.
Overall area under curve, sensitivity, and false prediction rate reach 0.915, 89.26%, 0.117/h on the American Epilepsy Society Seizure Prediction Challenge dataset and 0.970, 94.69%, 0.095/h on the CHB-MIT dataset, respectively.
arXiv Detail & Related papers (2022-06-08T09:27:37Z)
- An End-to-End Deep Learning Approach for Epileptic Seizure Prediction [4.094649684498489]
We propose an end-to-end deep learning solution using a convolutional neural network (CNN)
Overall sensitivity, false prediction rate, and area under receiver operating characteristic curve reaches 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on two datasets respectively.
arXiv Detail & Related papers (2021-08-17T05:49:43Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which combines channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators.
We prove that our method can solve the problems without any increase in the number of communications compared to the baseline.
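As a hedged sketch of what a randomized compression operator of this kind can look like (this is a common example, rand-k sparsification, not necessarily that paper's specific operator):

```python
import numpy as np

# Illustration only: rand-k sparsification keeps k of d coordinates at
# random and rescales by d/k, making the operator unbiased: E[C(v)] = v.
# This is the property communication-efficient methods typically rely on.
def rand_k(v, k, rng):
    d = v.size
    idx = rng.choice(d, size=k, replace=False)  # random support
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)                 # rescale for unbiasedness
    return out

rng = np.random.default_rng(0)
v = np.arange(8, dtype=float)
c = rand_k(v, k=2, rng=rng)
assert np.count_nonzero(c) <= 2  # only k coordinates need transmitting
# Averaging many compressed copies recovers v (unbiasedness):
avg = np.mean([rand_k(v, 2, rng) for _ in range(50000)], axis=0)
assert np.allclose(avg, v, atol=0.3)
```

Each message carries only k of d coordinates, cutting communication by d/k while the unbiasedness keeps the optimization convergent in expectation.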
arXiv Detail & Related papers (2020-11-03T13:35:53Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as convolution of a common source signal and sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
- On Compression Principle and Bayesian Optimization for Neural Networks [0.0]
We propose a compression principle stating that an optimal predictive model is the one that minimizes the total compressed message length of all data and the model definition while guaranteeing decodability.
We show that dropout can be used for a continuous dimensionality reduction that allows finding the optimal network dimensions required by the compression principle.
arXiv Detail & Related papers (2020-06-23T03:23:47Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- Back-and-Forth prediction for deep tensor compression [37.663819283148854]
We present a prediction scheme called Back-and-Forth (BaF) prediction, developed for deep feature tensors.
We achieve a 62% and 75% reduction in tensor size while keeping the network's accuracy loss below 1% and 2%, respectively.
arXiv Detail & Related papers (2020-02-14T01:32:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.