Robust Time Series Denoising with Learnable Wavelet Packet Transform
- URL: http://arxiv.org/abs/2206.06126v1
- Date: Mon, 13 Jun 2022 13:05:58 GMT
- Title: Robust Time Series Denoising with Learnable Wavelet Packet Transform
- Authors: Gaetan Frusque, Olga Fink
- Abstract summary: In many applications, signal denoising is often the first pre-processing step before any subsequent analysis or learning task.
We propose to apply a deep learning denoising model inspired by signal processing: a learnable version of the wavelet packet transform.
We demonstrate how the proposed algorithm relates to the universality of signal processing methods and the learning capabilities of deep learning approaches.
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In many applications, signal denoising is often the first pre-processing step
before any subsequent analysis or learning task. In this paper, we propose to
apply a deep learning denoising model inspired by signal processing: a
learnable version of the wavelet packet transform. The proposed algorithm has
significant learning capabilities with few interpretable parameters and has an
intuitive initialisation. We propose a post-learning modification of the
parameters to adapt the denoising to different noise levels. We evaluate the
performance of the proposed methodology on two case studies and compare it to
other state-of-the-art approaches, including wavelet shrinkage denoising,
convolutional neural network, autoencoder and U-net deep models. The first case
study is based on designed functions that have typically been used to study
denoising properties of the algorithms. The second case study is an audio
background removal task. We demonstrate how the proposed algorithm relates to
the universality of signal processing methods and the learning capabilities of
deep learning approaches. In particular, we evaluate the obtained denoising
performance on structured noisy signals inside and outside the classes used
for training. In addition to having good performance in denoising signals
inside and outside the training class, our method proves to be particularly
robust when different noise levels, noise types and artifacts are added.
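The abstract describes learnable analysis and synthesis filters combined with learnable soft-thresholds that can be rescaled after training to match a new noise level. Below is a minimal PyTorch sketch of that idea, assuming Haar-initialised filters, a two-level packet decomposition and an MSE training loss; these choices are illustrative and not necessarily the authors' exact architecture.
```python
# Minimal sketch of a learnable wavelet packet denoiser (illustrative, not the
# paper's exact model): learnable Haar-initialised filters plus learnable
# soft-thresholds on every leaf sub-band.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableWPTDenoiser(nn.Module):
    def __init__(self, levels: int = 2, init_threshold: float = 0.1):
        super().__init__()
        self.levels = levels
        # Haar low-pass / high-pass pair, refined by gradient descent.
        lo = torch.tensor([1.0, 1.0]) / 2 ** 0.5
        hi = torch.tensor([1.0, -1.0]) / 2 ** 0.5
        self.filters = nn.Parameter(torch.stack([lo, hi]).unsqueeze(1))  # (2, 1, 2)
        # One learnable threshold per leaf sub-band (2**levels leaves).
        self.thresholds = nn.Parameter(torch.full((2 ** levels,), init_threshold))

    def analysis(self, x):
        # x: (batch, 1, time); split every band into low/high at each level.
        bands = [x]
        for _ in range(self.levels):
            split = []
            for b in bands:
                out = F.conv1d(b, self.filters, stride=2)   # (batch, 2, time/2)
                split.extend([out[:, :1], out[:, 1:]])
            bands = split
        return bands

    def synthesis(self, bands):
        # Invert the packet tree with transposed convolutions of the same filters.
        for _ in range(self.levels):
            merged = []
            for i in range(0, len(bands), 2):
                pair = torch.cat([bands[i], bands[i + 1]], dim=1)
                merged.append(F.conv_transpose1d(pair, self.filters, stride=2))
            bands = merged
        return bands[0]

    def forward(self, x):
        bands = self.analysis(x)
        # Soft-threshold each leaf band; rescaling the thresholds after training
        # is one way to adapt the denoiser to a different noise level.
        bands = [torch.sign(b) * F.relu(b.abs() - t)
                 for b, t in zip(bands, self.thresholds)]
        return self.synthesis(bands)


# Usage: train with an MSE loss between the denoised output and a clean target.
model = LearnableWPTDenoiser()
noisy = torch.randn(8, 1, 256)        # (batch, channel, time)
denoised = model(noisy)
```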
Related papers
- Pivotal Auto-Encoder via Self-Normalizing ReLU [20.76999663290342]
We formalize single hidden layer sparse auto-encoders as a transform learning problem.
We propose an optimization problem that leads to a predictive model invariant to the noise level at test time.
Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise.
arXiv Detail & Related papers (2024-06-23T09:06:52Z)
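A minimal sketch of a single-hidden-layer sparse auto-encoder in the transform-learning style summarised above; the dimensions and the soft-threshold nonlinearity (standing in for the paper's self-normalizing ReLU, whose exact form is not reproduced here) are assumptions.
```python
# Generic single-hidden-layer sparse auto-encoder (illustrative assumptions only).
import torch
import torch.nn as nn


class SparseAutoEncoder(nn.Module):
    def __init__(self, dim: int = 256, code_dim: int = 512):
        super().__init__()
        self.encoder = nn.Linear(dim, code_dim, bias=False)   # learned transform
        self.threshold = nn.Parameter(torch.full((code_dim,), 0.1))
        self.decoder = nn.Linear(code_dim, dim, bias=False)

    def forward(self, x):
        z = self.encoder(x)
        # Sparsifying shrinkage; adjusting the threshold at test time is one way
        # to keep the predictor usable across noise levels.
        z = torch.sign(z) * torch.relu(z.abs() - self.threshold)
        return self.decoder(z)
```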
- Denoising-Aware Contrastive Learning for Noisy Time Series [35.97130925600067]
Time series self-supervised learning (SSL) aims to exploit unlabeled data for pre-training to mitigate the reliance on labels.
We propose denoising-aware contrastive learning (DECL), which mitigates the noise in the representation and automatically selects suitable denoising methods for every sample.
arXiv Detail & Related papers (2024-06-07T04:27:32Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
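A hedged sketch of the affine feature-space tuning idea: a learnable scale and shift applied to frozen pre-trained features before the task head. The class name is hypothetical and NMTune's additional regularization terms are omitted.
```python
# Illustrative affine adaptation of frozen pre-trained features (not the full
# NMTune objective).
import torch
import torch.nn as nn


class AffineFeatureTuner(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder.eval()              # pre-trained, kept frozen
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.scale = nn.Parameter(torch.ones(feat_dim))
        self.shift = nn.Parameter(torch.zeros(feat_dim))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder(x)                # noisy pre-trained features
        return self.head(self.scale * feats + self.shift)
```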
- Continuous Modeling of the Denoising Process for Speech Enhancement Based on Deep Learning [61.787485727134424]
We use a state variable to indicate the denoising process.
A UNet-like neural network learns to estimate every state variable sampled from the continuous denoising process.
Experimental results indicate that preserving a small amount of noise in the clean target benefits speech enhancement.
arXiv Detail & Related papers (2023-09-17T13:27:11Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
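A hedged sketch of the masking idea summarised above: random input pixels are dropped and the network must reconstruct the image anyway. The masking ratio and the plain MSE loss are illustrative assumptions, not the paper's exact recipe.
```python
# Illustrative masked-pixel training step for a denoising network.
import torch
import torch.nn.functional as F


def masked_training_step(model, noisy, clean, mask_ratio: float = 0.5):
    # Zero out a random subset of input pixels; the network has to infer them.
    mask = (torch.rand_like(noisy) > mask_ratio).float()
    pred = model(noisy * mask)
    return F.mse_loss(pred, clean)
```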
- Deep Variation Prior: Joint Image Denoising and Noise Variance Estimation without Clean Data [2.3061446605472558]
This paper investigates the tasks of image denoising and noise variance estimation in a single, joint learning framework.
We build upon DVP, an unsupervised deep learning framework that simultaneously learns a denoiser and estimates noise variances.
Our method does not require any clean training images or an external step of noise estimation; instead, it approximates the minimum mean squared error denoiser using only a set of noisy images.
arXiv Detail & Related papers (2022-09-19T17:29:32Z)
- Geometric and Learning-based Mesh Denoising: A Comprehensive Survey [17.652531757914]
Mesh denoising is a fundamental problem in digital geometry processing.
We provide a review of the advances in mesh denoising, containing both traditional geometric approaches and recent learning-based methods.
arXiv Detail & Related papers (2022-09-02T06:54:32Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model.
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
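A hedged sketch of the noise-stability idea summarised above: perturb a hidden representation with standard Gaussian noise and penalize how much the next layer's output changes. The layer choice and noise scale are assumptions, not the exact LNSR objective.
```python
# Illustrative layerwise noise-stability penalty, added to the fine-tuning loss.
import torch
import torch.nn.functional as F


def noise_stability_penalty(layer, hidden, sigma: float = 1.0):
    noisy_hidden = hidden + sigma * torch.randn_like(hidden)
    return F.mse_loss(layer(noisy_hidden), layer(hidden))
```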
- Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
arXiv Detail & Related papers (2021-09-18T14:51:24Z)
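A minimal sketch of a fully convolutional denoising auto-encoder for multichannel 1-D recordings; channel counts, kernel sizes and depth are illustrative assumptions.
```python
# Illustrative fully convolutional denoising auto-encoder for multichannel signals.
import torch.nn as nn


class FCDAE(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(64, channels, kernel_size=9, padding=4),
        )

    def forward(self, noisy):
        return self.net(noisy)   # clean estimate, same shape as the input
```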
- Self-Supervised Fast Adaptation for Denoising via Meta-Learning [28.057705167363327]
We propose a new denoising approach that can greatly outperform the state-of-the-art supervised denoising methods.
We show that the proposed method can be easily employed with state-of-the-art denoising networks without additional parameters.
arXiv Detail & Related papers (2020-01-09T09:40:53Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.