Orthogonal Features-based EEG Signal Denoising using Fractionally
Compressed AutoEncoder
- URL: http://arxiv.org/abs/2102.08083v1
- Date: Tue, 16 Feb 2021 11:15:00 GMT
- Title: Orthogonal Features-based EEG Signal Denoising using Fractionally
Compressed AutoEncoder
- Authors: Subham Nagar, Ahlad Kumar, M.N.S. Swamy
- Abstract summary: A fractional-based compressed auto-encoder architecture has been introduced to solve the problem of denoising electroencephalogram (EEG) signals.
The proposed architecture provides improved denoising results on the standard datasets when compared with the existing methods.
- Score: 16.889633963766858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fractional-based compressed auto-encoder architecture has been introduced
to solve the problem of denoising electroencephalogram (EEG) signals. The
architecture makes use of fractional calculus to calculate the gradients during
the backpropagation process, as a result of which a new hyper-parameter in the
form of fractional order ($\alpha$) has been introduced which can be tuned to
get the best denoising performance. Additionally, to avoid substantial use of
memory resources, the model makes use of orthogonal features in the form of
Tchebichef moments as input. The orthogonal features have been used in
achieving compression at the input stage. Given the growing use of low-energy
devices, compression of neural networks becomes imperative. Here, the
auto-encoder's weights are compressed using the randomized singular value
decomposition (RSVD) algorithm during training while evaluation is performed
using various compression ratios. The experimental results show that the
proposed fractionally compressed architecture provides improved denoising
results on the standard datasets when compared with the existing methods.
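The abstract describes two concrete mechanisms: a fractional-order ($\alpha$) gradient used during backpropagation, and compression of the auto-encoder's weights with randomized SVD (RSVD). Below is a minimal NumPy sketch of both ideas; the Caputo-style first-order approximation of the fractional gradient, the function names, and the oversampling parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from math import gamma

def fractional_grad_step(w, grad, alpha=0.9, lr=0.01, c=0.0, eps=1e-8):
    """One fractional-order update (hypothetical sketch).

    Uses a common first-order Caputo-style approximation:
        D^alpha f(w) ~= f'(w) * |w - c|^(1 - alpha) / Gamma(2 - alpha)
    so alpha acts as a tunable hyper-parameter; alpha = 1 recovers
    the ordinary gradient-descent step.
    """
    scale = np.abs(w - c + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    return w - lr * grad * scale

def rsvd(W, rank, n_oversample=10, rng=None):
    """Randomized SVD: rank-`rank` factorization of a weight matrix W."""
    rng = np.random.default_rng(rng)
    # Random range finder: sketch the column space of W.
    Omega = rng.standard_normal((W.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(W @ Omega)
    # Exact SVD of the much smaller projected matrix Q^T W.
    U_tilde, s, Vt = np.linalg.svd(Q.T @ W, full_matrices=False)
    U = Q @ U_tilde
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Compress a 256x128 "weight matrix" to rank 16.
W = np.random.default_rng(0).standard_normal((256, 128))
U, s, Vt = rsvd(W, rank=16, rng=0)
W_approx = U @ np.diag(s) @ Vt
```

Storing the factors (U, s, Vt) in place of W trades reconstruction accuracy for memory at a chosen compression ratio, mirroring the evaluation over various compression ratios described above.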
Related papers
- Approximately Invertible Neural Network for Learned Image Compression [19.330720001489937]
This paper proposes an Approximately Invertible Neural Network (A-INN) framework for learned image compression.
It formulates the rate-distortion optimization in lossy image compression when using INN with quantization.
Extensive experiments demonstrate that the proposed A-INN outperforms the existing learned image compression methods.
arXiv Detail & Related papers (2024-08-30T07:57:47Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- Neural Estimation of the Rate-Distortion Function With Applications to Operational Source Coding [25.59334941818991]
A fundamental question in designing lossy data compression schemes is how well one can do in comparison with the rate-distortion function.
We investigate methods to estimate the rate-distortion function on large, real-world data.
We apply the resulting rate-distortion estimator, called NERD, on popular image datasets.
arXiv Detail & Related papers (2022-04-04T16:06:40Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network, which can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Communication-Efficient Federated Learning via Quantized Compressed Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance with the case that performs no compression.
arXiv Detail & Related papers (2021-11-30T02:13:54Z)
- Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
arXiv Detail & Related papers (2021-05-16T20:53:31Z)
- Orthogonal Features Based EEG Signals Denoising Using Fractional and Compressed One-Dimensional CNN AutoEncoder [3.8580784887142774]
This paper presents a fractional one-dimensional convolutional neural network (CNN) autoencoder for denoising the Electroencephalogram (EEG) signals.
EEG signals often get contaminated with noise during the recording process, mostly due to muscle artifacts (MA).
arXiv Detail & Related papers (2021-04-16T13:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.