Enhancing convolutional neural network generalizability via low-rank weight approximation
- URL: http://arxiv.org/abs/2209.12715v2
- Date: Thu, 1 Aug 2024 13:53:43 GMT
- Title: Enhancing convolutional neural network generalizability via low-rank weight approximation
- Authors: Chenyin Gao, Shu Yang, Anru R. Zhang
- Abstract summary: Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
- Score: 6.763245393373041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noise is ubiquitous during image acquisition. Sufficient denoising is often an important first step for image processing. In recent decades, deep neural networks (DNNs) have been widely used for image denoising. Most DNN-based image denoising methods require a large-scale dataset or focus on supervised settings, in which single or pairs of clean images or a set of noisy images are required. This poses a significant burden on the image acquisition process. Moreover, denoisers trained on datasets of limited scale may overfit. To mitigate these issues, we introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation. With the proposed design, we are able to characterize our denoiser with fewer parameters and train it on a single image, which considerably improves the model's generalizability and reduces the cost of data acquisition. Extensive experiments on both synthetic and real-world noisy images have been conducted. Empirical results show that our proposed method outperforms existing non-learning-based methods (e.g., low-pass filter, non-local mean) and single-image unsupervised denoisers (e.g., DIP, NN+BM3D) when evaluated on both in-sample and out-of-sample datasets. The proposed method even achieves performance comparable to some supervised methods (e.g., DnCNN).
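The Tucker low-rank approximation at the heart of the abstract can be illustrated with a generic truncated higher-order SVD (HOSVD). This is a minimal sketch of the standard construction, not the paper's actual denoiser parameterization; all function names and the toy data below are illustrative:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front, then flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_approx(tensor, ranks):
    """Truncated HOSVD: keep the top singular vectors of each mode unfolding,
    project onto them, then reconstruct the low-multilinear-rank tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Project onto each factor (U_k^T along mode k) to get the small core.
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    # Reconstruct by multiplying the core back out (U_k along mode k).
    approx = core
    for mode, U in enumerate(factors):
        approx = np.moveaxis(
            np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

# Toy example: a low-rank "image" tensor corrupted with additive noise.
rng = np.random.default_rng(0)
clean = np.einsum('i,j,k->ijk', *[rng.normal(size=8) for _ in range(3)])
noisy = clean + 0.05 * rng.normal(size=clean.shape)
denoised = tucker_approx(noisy, ranks=[2, 2, 2])
```

Because the clean signal is low-rank while the noise spreads across all modes, truncating to a small multilinear rank discards most of the noise energy while retaining the signal.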
Related papers
- Self-Calibrated Variance-Stabilizing Transformations for Real-World Image Denoising [19.08732222562782]
Supervised deep learning has become the method of choice for image denoising.
We show that, contrary to popular belief, denoising networks specialized in the removal of Gaussian noise can be efficiently leveraged in favor of real-world image denoising.
arXiv Detail & Related papers (2024-07-24T16:23:46Z) - Robust Deep Ensemble Method for Real-world Image Denoising [62.099271330458066]
We propose a simple yet effective Bayesian deep ensemble (BDE) method for real-world image denoising.
Our BDE achieves +0.28dB PSNR gain over the state-of-the-art denoising method.
Our BDE can be extended to other image restoration tasks, and achieves +0.30dB, +0.18dB and +0.12dB PSNR gains on benchmark datasets.
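The gains above are quoted in PSNR (peak signal-to-noise ratio), which compares a reference image to an estimate via mean squared error. A minimal sketch (the function name and signature are illustrative, not from any of the cited papers):

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A +0.28 dB gain corresponds to the MSE shrinking by a factor of
# 10 ** (0.28 / 10), i.e. roughly 6.7%.
ref = np.zeros((4, 4))
est = np.full((4, 4), 0.1)
print(psnr(ref, est))  # MSE = 0.01, so PSNR = 20.0 dB
```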
arXiv Detail & Related papers (2022-06-08T06:19:30Z) - Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z) - IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z) - Image Denoising using Attention-Residual Convolutional Neural Networks [0.0]
We propose a new learning-based non-blind denoising technique named Attention Residual Convolutional Neural Network (ARCNN) and its extension to blind denoising named Flexible Attention Residual Convolutional Neural Network (FARCNN).
ARCNN achieved overall average PSNR gains of around 0.44 dB and 0.96 dB for Gaussian and Poisson denoising, respectively. FARCNN presented very consistent results, with only slightly worse performance compared to ARCNN.
arXiv Detail & Related papers (2021-01-19T16:37:57Z) - Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
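The neighbor sub-sampler described above can be sketched as follows: each non-overlapping 2x2 cell of the noisy image contributes one pixel to the input sub-image and a different pixel to the target sub-image. This is an illustrative reconstruction of the idea, not the paper's reference implementation; the function name and sampling details are assumptions:

```python
import numpy as np

def neighbor_subsample(noisy, rng):
    """Split one noisy image into two half-resolution sub-images by picking
    two different pixels from each non-overlapping 2x2 cell."""
    h, w = noisy.shape
    # Group pixels into 2x2 cells: shape (h//2, w//2, 4).
    cells = noisy[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    cells = cells.transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    # Choose two distinct positions per cell: one for input, one for target.
    idx_in = rng.integers(0, 4, size=(h // 2, w // 2))
    idx_tgt = (idx_in + rng.integers(1, 4, size=(h // 2, w // 2))) % 4
    sub_in = np.take_along_axis(cells, idx_in[..., None], axis=2)[..., 0]
    sub_tgt = np.take_along_axis(cells, idx_tgt[..., None], axis=2)[..., 0]
    return sub_in, sub_tgt

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
sub_in, sub_tgt = neighbor_subsample(img, rng)
```

Because neighboring pixels share (approximately) the same underlying signal but carry independent noise, such pairs can serve as noisy input/target pairs for training without any clean image.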
arXiv Detail & Related papers (2021-01-08T02:03:25Z) - Improving Blind Spot Denoising for Microscopy [73.94017852757413]
We present a novel way to improve the quality of self-supervised denoising.
We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network.
arXiv Detail & Related papers (2020-08-19T13:06:24Z) - Noise2Inverse: Self-supervised deep convolutional denoising for tomography [0.0]
Noise2Inverse is a deep CNN-based denoising method for linear image reconstruction algorithms.
We develop a theoretical framework which shows that such training indeed obtains a denoising CNN.
On simulated CT datasets, Noise2Inverse demonstrates an improvement in peak signal-to-noise ratio and structural similarity index.
arXiv Detail & Related papers (2020-01-31T12:50:24Z) - Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.