Unsupervised Neural Universal Denoiser for Finite-Input General-Output
Noisy Channel
- URL: http://arxiv.org/abs/2003.02623v1
- Date: Thu, 5 Mar 2020 17:11:56 GMT
- Title: Unsupervised Neural Universal Denoiser for Finite-Input General-Output
Noisy Channel
- Authors: Tae-Eon Park and Taesup Moon
- Abstract summary: We devise a novel neural network-based universal denoiser for the finite-input, general-output (FIGO) channel.
Based on the assumption of known noisy channel densities, we train the network such that it can denoise as well as the best sliding window denoiser for any given underlying clean source data.
- Score: 25.26787589154647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We devise a novel neural network-based universal denoiser for the finite-input, general-output (FIGO) channel. Based on the assumption of known noisy channel densities, which is realistic in many practical scenarios, we train the network such that it can denoise as well as the best sliding window denoiser for any given underlying clean source data. Our algorithm, dubbed Generalized CUDE (Gen-CUDE), enjoys several desirable properties: it can be trained in an unsupervised manner (solely based on the noisy observation data), has much smaller computational complexity than the previously developed universal denoiser for the same setting, and admits a much tighter upper bound on the denoising performance, obtained via theoretical analysis. In our experiments, we show that this tighter upper bound is also realized in practice: Gen-CUDE achieves much better denoising results than other strong baselines on both synthetic and real underlying clean sequences.
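The unsupervised training idea can be made concrete with a small sketch. The following is a minimal illustration assuming an additive Gaussian FIGO channel and a binary input alphabet; the network, window size, and mixture-likelihood objective here are illustrative stand-ins, not the authors' exact Gen-CUDE implementation:

```python
# Sketch of unsupervised sliding-window denoising for a FIGO channel with a
# *known* channel density f(z|x) = N(z; x, SIGMA^2). Illustrative only.
import math
import torch
import torch.nn as nn

ALPHABET = torch.tensor([0.0, 1.0])   # finite input alphabet (assumed binary)
SIGMA = 0.3                           # known channel noise level (assumed)
K = 2                                 # one-sided sliding-window size

def channel_density(z, x):
    """Known FIGO channel density f(z | x), here additive Gaussian."""
    return torch.exp(-0.5 * ((z - x) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def contexts(z):
    """Stack the 2K noisy neighbours around each interior position."""
    return torch.stack([torch.cat([z[i - K:i], z[i + 1:i + K + 1]])
                        for i in range(K, len(z) - K)])

class WindowDenoiser(nn.Module):
    """Maps 2K noisy context symbols to a distribution over the clean alphabet."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * K, 32), nn.ReLU(),
                                 nn.Linear(32, len(ALPHABET)))
    def forward(self, ctx):
        return torch.softmax(self.net(ctx), dim=-1)

def unsupervised_loss(model, z):
    """Negative log-likelihood of each center observation under the mixture
    sum_x p_net(x | context) f(z_center | x); uses only the noisy data z."""
    p_x = model(contexts(z))                               # (N, |alphabet|)
    center = z[K:len(z) - K]
    mix = (p_x * channel_density(center[:, None], ALPHABET[None, :])).sum(-1)
    return -torch.log(mix + 1e-12).mean()

# train on the noisy sequence alone, then denoise it
clean = ALPHABET[torch.randint(0, 2, (1000,))]
z = clean + SIGMA * torch.randn_like(clean)
model = WindowDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    unsupervised_loss(model, z).backward()
    opt.step()
with torch.no_grad():
    x_hat = ALPHABET[model(contexts(z)).argmax(-1)]        # reconstructed symbols
```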
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
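For intuition, here is a minimal numpy sketch of classical AMP with a fixed soft-threshold denoiser on a synthetic compressed-sensing instance; the paper's point is that unrolled, learned layers provably converge to the Bayes-AMP denoisers, for which this hand-written version is only a stand-in:

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-threshold denoiser eta(v; tau)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, n_iters=30):
    """AMP for y = A x + noise; the Onsager term distinguishes it from ISTA."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iters):
        tau = np.sqrt(np.mean(z ** 2))                # noise-level estimate
        x_new = soft_threshold(x + A.T @ z, tau)      # denoise the pseudo-data
        onsager = (z / m) * np.count_nonzero(x_new)   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# usage on a synthetic sparse-recovery instance
rng = np.random.default_rng(0)
n, m, k = 500, 250, 25
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = amp(y, A)
```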
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches [28.61750072026107]
Deep learning methods have shown remarkable performance in image denoising, particularly when trained on large-scale paired datasets.
However, acquiring such paired datasets for real-world scenarios poses a significant challenge.
arXiv Detail & Related papers (2023-08-13T14:04:46Z)
- Feature Noise Boosts DNN Generalization under Label Noise [65.36889005555669]
The presence of label noise in the training data has a profound impact on the generalization of deep neural networks (DNNs).
In this study, we introduce and theoretically demonstrate a simple feature noise method, which directly adds noise to the features of training data.
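A one-function sketch of the general recipe, assuming simple additive Gaussian feature noise (the study's exact noise construction may differ):

```python
import torch

def add_feature_noise(x, std=0.1):
    """Perturb the *features* of a training batch while keeping labels fixed;
    a generic stand-in for the paper's feature-noise method."""
    return x + std * torch.randn_like(x)

# usage inside an ordinary training loop:
#   loss = criterion(model(add_feature_noise(x)), y)
```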
arXiv Detail & Related papers (2023-08-03T08:31:31Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Noise Injection Node Regularization for Robust Learning [0.0]
Noise Injection Node Regularization (NINR) is a method of injecting structured noise into Deep Neural Networks (DNN) during the training stage, resulting in an emergent regularizing effect.
We present theoretical and empirical evidence for substantial improvement in robustness against various test data perturbations for feed-forward DNNs when trained under NINR.
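A generic sketch of a noise-injection node as a drop-in layer; the structured noise NINR actually injects may differ from the plain Gaussian used here:

```python
import torch
import torch.nn as nn

class NoiseInjectionNode(nn.Module):
    """Hypothetical noise-injection layer: adds Gaussian noise to its input
    during training only, so the regularizing effect vanishes at test time."""
    def __init__(self, std=0.1):
        super().__init__()
        self.std = std
    def forward(self, x):
        if self.training:
            return x + self.std * torch.randn_like(x)
        return x

# usage: drop the node between layers of a feed-forward DNN
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                      NoiseInjectionNode(std=0.05),
                      nn.Linear(128, 10))
```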
arXiv Detail & Related papers (2022-10-27T20:51:15Z)
- Heavy-tailed denoising score matching [5.371337604556311]
We develop an iterative noise scaling algorithm to consistently initialise the multiple levels of noise in Langevin dynamics.
On the practical side, our use of heavy-tailed DSM leads to improved score estimation, controllable sampling convergence, and more balanced unconditional generative performance for imbalanced datasets.
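For context, the sketch below shows standard annealed Langevin dynamics over a geometric noise schedule, the setting whose noise levels the paper's iterative scaling algorithm initialises; it uses a Gaussian toy score rather than the heavy-tailed DSM estimator:

```python
import torch

def annealed_langevin(score_fn, sigmas, n_steps=100, eps=2e-5, dim=2):
    """Annealed Langevin sampling over a decreasing noise schedule."""
    x = torch.randn(dim)
    for sigma in sigmas:                      # anneal from large to small noise
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            x = x + 0.5 * step * score_fn(x, sigma) + step.sqrt() * torch.randn_like(x)
    return x

sigmas = torch.logspace(0, -2, 10)            # geometric schedule: 1.0 -> 0.01
toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)   # score of a smoothed N(0, I)
sample = annealed_langevin(toy_score, sigmas)
```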
arXiv Detail & Related papers (2021-12-17T22:04:55Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
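A small sketch of what that prescription looks like as an update rule: if C estimates the expected gradient covariance E[gg^T], the injected noise should have covariance C^{1/2}, so a standard normal is multiplied by C^{1/4}. Names and the full-matrix treatment are illustrative and only feasible for small parameter counts:

```python
import torch

def anisotropic_sgld_step(theta, grad, grad_cov, lr=1e-3):
    """One SGLD-style update with injected-noise covariance C^{1/2},
    where C = grad_cov estimates the expected gradient covariance."""
    evals, evecs = torch.linalg.eigh(grad_cov)                           # symmetric PSD estimate
    quarter = evecs @ torch.diag(evals.clamp(min=0) ** 0.25) @ evecs.T   # C^{1/4}
    noise = quarter @ torch.randn(theta.shape[0])                        # Cov(noise) = C^{1/2}
    return theta - lr * grad + (2 * lr) ** 0.5 * noise

# usage on a toy problem
dim = 4
g = torch.randn(dim)
A = torch.randn(dim, dim)
grad_cov = A @ A.T / dim                                # PSD stand-in for E[g g^T]
theta = anisotropic_sgld_step(torch.zeros(dim), g, grad_cov)
```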
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders [62.997667081978825]
We propose a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input.
The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals.
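A minimal PyTorch sketch of such a fully convolutional denoising autoencoder for multichannel 1-D signals; channel counts and kernel sizes here are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class FCDAE(nn.Module):
    """Fully convolutional denoising autoencoder for multichannel signals."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv1d(64, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, in_channels, kernel_size=9, padding=4))
    def forward(self, x):                  # x: (batch, channels, time)
        return self.decoder(self.encoder(x))

model = FCDAE()
noisy = torch.randn(8, 4, 1024)            # batch of noisy multichannel windows
denoised = model(noisy)                    # same shape as the input
```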
arXiv Detail & Related papers (2021-09-18T14:51:24Z)
- Multi-Contextual Design of Convolutional Neural Network for Steganalysis [8.631228373008478]
It is observed that recent steganographic embedding schemes do not always restrict their embedding to the high-frequency zone; instead, they distribute it according to the embedding policy.
In this work, unlike the conventional approaches, the proposed model first extracts the noise residual using learned denoising kernels to boost the signal-to-noise ratio.
After preprocessing, the sparse noise residuals are fed to a novel Multi-Contextual Convolutional Neural Network (M-CNET) that uses heterogeneous context size to learn the sparse and low-amplitude representation of noise residuals.
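The heterogeneous-context idea can be sketched as parallel convolutions with different kernel sizes over a noise residual. The fixed high-pass kernel below is a stand-in for the learned denoising kernels, and the block is illustrative of the design, not the exact M-CNET architecture:

```python
import torch
import torch.nn as nn

class MultiContextBlock(nn.Module):
    """Parallel convolutions with heterogeneous kernel sizes, concatenated."""
    def __init__(self, in_ch=1, out_ch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)])
    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# fixed high-pass kernel as a stand-in for learned denoising kernels
hp = torch.tensor([[[[-1., 2., -1.], [2., -4., 2.], [-1., 2., -1.]]]]) / 4.0
img = torch.randn(1, 1, 64, 64)
residual = torch.nn.functional.conv2d(img, hp, padding=1)   # noise residual
features = MultiContextBlock()(residual)                    # (1, 24, 64, 64)
```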
arXiv Detail & Related papers (2021-06-19T05:38:52Z)
- Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise [51.66448070984615]
Supervised learning under label noise has seen numerous advances recently.
We present a theoretical hypothesis test and show that the noise in real-world datasets is unlikely to be class-conditional noise (CCN).
We formalize an algorithm to generate controllable instance-dependent noise (IDN).
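One simple recipe for such noise makes each sample's flip probability depend on its own features, scaled so the average matches a target rate. The sketch below is an illustrative recipe under that assumption, not the paper's exact algorithm:

```python
import numpy as np

def make_idn_labels(X, y, n_classes, noise_rate=0.2, seed=0):
    """Generate instance-dependent label noise: flip probability is a
    feature-dependent sigmoid, normalised to the desired mean rate."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])           # random projection of features
    p = 1 / (1 + np.exp(-(X @ w)))            # per-instance flip probability
    p = np.clip(p * (noise_rate / p.mean()), 0, 1)
    flip = rng.random(len(y)) < p
    noisy = y.copy()
    # flipped samples move to a random *other* class
    noisy[flip] = (y[flip] + rng.integers(1, n_classes, flip.sum())) % n_classes
    return noisy
```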
arXiv Detail & Related papers (2020-12-10T05:16:18Z)