Multiscale Sparsifying Transform Learning for Image Denoising
- URL: http://arxiv.org/abs/2003.11265v5
- Date: Sun, 25 Jul 2021 18:16:20 GMT
- Title: Multiscale Sparsifying Transform Learning for Image Denoising
- Authors: Ashkan Abbasi, Amirhassan Monadjemi, Leyuan Fang, Hossein Rabbani,
Neda Noormohammadi, Yi Zhang
- Abstract summary: We show that an efficient multiscale method can be devised without the need for denoising detail subbands.
We analyze and assess the studied methods thoroughly and compare them with the well-known and state-of-the-art methods.
- Score: 24.04866867707783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data-driven sparse methods such as synthesis dictionary learning (e.g.,
K-SVD) and sparsifying transform learning have been proven effective in image
denoising. However, they are intrinsically single-scale, which can lead to
suboptimal results. We propose two methods developed based on wavelet subbands
mixing to efficiently combine the merits of both single and multiscale methods.
We show that an efficient multiscale method can be devised without the need for
denoising detail subbands, which substantially reduces the runtime. The proposed
methods are initially derived within the framework of sparsifying transform
learning denoising, and then, they are generalized to propose our multiscale
extensions for the well-known K-SVD and SAIST image denoising methods. We
analyze and assess the studied methods thoroughly and compare them with the
well-known and state-of-the-art methods. The experiments show that our methods
are able to offer good trade-offs between performance and complexity.
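As a rough illustration of the subband-mixing idea described in the abstract (a sketch, not the authors' implementation), the code below runs a stand-in single-scale denoiser once, then re-denoises only the Haar approximation subband and keeps the detail subbands untouched, so no extra work is spent on detail subbands. The box filter, the `subband_mixing_denoise` name, and all parameter choices are placeholders for a learned sparse model such as K-SVD or a sparsifying transform:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar decomposition into approximation (LL) and
    detail (LH, HL, HH) subbands; requires even height and width."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0  # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0  # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def box_filter(x, k=3):
    """Stand-in single-scale denoiser (k-by-k box filter); in the paper,
    a learned sparse model plays this role."""
    p = k // 2
    xp = np.pad(x, p, mode="reflect")
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (k * k)

def subband_mixing_denoise(noisy, denoiser=box_filter):
    """Single-scale pass, then re-denoise only the approximation subband
    and mix it back with the untouched detail subbands."""
    single = denoiser(noisy)
    LL, LH, HL, HH = haar2d(single)
    return ihaar2d(denoiser(LL), LH, HL, HH)
```

On a smooth synthetic image with additive Gaussian noise, the mixed result typically has lower mean-squared error than the noisy input while only ever denoising one quarter-size approximation subband beyond the single-scale pass.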
Related papers
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step.
To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration.
Our method achieves strong performance on both full-reference and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- Fast, Accurate Manifold Denoising by Tunneling Riemannian Optimization [4.597774455074727]
We consider the problem of efficiently denoising a noisy new data point sampled from an unknown $d$-dimensional manifold $M \in \mathbb{R}^D$, using only noisy samples.
This work proposes a framework for test-time efficient manifold denoising, by framing the concept of "learning-to-denoise" as "learning-to-optimize".
arXiv Detail & Related papers (2025-02-24T04:02:16Z)
- Self-Calibrated Variance-Stabilizing Transformations for Real-World Image Denoising [19.08732222562782]
Supervised deep learning has become the method of choice for image denoising.
We show that, contrary to popular belief, denoising networks specialized in the removal of Gaussian noise can be efficiently leveraged in favor of real-world image denoising.
arXiv Detail & Related papers (2024-07-24T16:23:46Z)
- A Comparison of Image Denoising Methods [23.69991964391047]
We compare a variety of denoising methods on both synthetic and real-world datasets for different applications.
We show that a simple matrix-based algorithm may be able to produce similar results compared with its tensor counterparts.
In spite of the progress in recent years, we discuss shortcomings and possible extensions of existing techniques.
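The claim above, that a simple matrix-based algorithm can rival tensor methods, can be illustrated with a minimal low-rank patch-matrix denoiser: stack overlapping patches as columns, truncate the SVD of that matrix, and average the reconstructed patches back. This is a sketch under assumed parameters (`patch`, `rank`, and the function name are illustrative), not the algorithm evaluated in the paper:

```python
import numpy as np

def lowrank_patch_denoise(noisy, patch=8, rank=4):
    """Matrix-based denoising sketch: overlapping patches become columns
    of a matrix, whose SVD is truncated to a small rank; reconstructed
    patches are averaged back into the image."""
    H, W = noisy.shape
    stride = patch // 2
    cols, pos = [], []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            cols.append(noisy[i:i + patch, j:j + patch].ravel())
            pos.append((i, j))
    P = np.stack(cols, axis=1)                 # (patch*patch, n_patches)
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s[rank:] = 0.0                             # keep only the top singular values
    P_hat = (U * s) @ Vt
    out = np.zeros_like(noisy)
    cnt = np.zeros_like(noisy)
    for k, (i, j) in enumerate(pos):
        out[i:i + patch, j:j + patch] += P_hat[:, k].reshape(patch, patch)
        cnt[i:i + patch, j:j + patch] += 1.0
    return out / np.maximum(cnt, 1.0)          # average overlapping estimates
```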
arXiv Detail & Related papers (2023-04-18T13:41:42Z)
- Linear Combinations of Patches are Unreasonably Effective for Single-Image Denoising [5.893124686141782]
Deep neural networks have revolutionized image denoising by achieving significant accuracy improvements.
To alleviate the requirement to learn image priors externally, single-image methods perform denoising solely based on the analysis of the input noisy image.
This work investigates the effectiveness of linear combinations of patches for denoising under this constraint.
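A minimal example of denoising by linear combinations of patches is a non-local-means-style weighted average, where each output pixel is a linear combination of pixels whose surrounding patches resemble the target patch. The function name, weight kernel, and parameters below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def patch_linear_combination_denoise(noisy, patch=3, search=7, h=0.5):
    """Each denoised pixel is a weighted (linear) combination of pixels
    in a search window, weighted by patch similarity."""
    p, s = patch // 2, search // 2
    pad = p + s
    xp = np.pad(noisy, pad, mode="reflect")
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = xp[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    cand = xp[ci + di - p:ci + di + p + 1,
                              cj + dj - p:cj + dj + p + 1]
                    # similarity weight from the mean squared patch difference
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    acc += w * xp[ci + di, cj + dj]
            out[i, j] = acc / wsum              # normalized linear combination
    return out
```

The weights sum to one after normalization, so the output at each pixel is literally a convex linear combination of noisy pixels, which is the constraint the paper studies.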
arXiv Detail & Related papers (2022-12-01T10:52:03Z)
- Weighted Ensemble Self-Supervised Learning [67.24482854208783]
Ensembling has proven to be a powerful technique for boosting model performance.
We develop a framework that permits data-dependent weighted cross-entropy losses.
Our method outperforms both baselines on multiple evaluation metrics on ImageNet-1K.
arXiv Detail & Related papers (2022-11-18T02:00:17Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Robust Imitation Learning from Noisy Demonstrations [81.67837507534001]
We show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss.
We propose a new imitation learning method that effectively combines pseudo-labeling with co-training.
Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-20T10:41:37Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
- Noise2Inverse: Self-supervised deep convolutional denoising for tomography [0.0]
Noise2Inverse is a deep CNN-based denoising method for linear image reconstruction algorithms.
We develop a theoretical framework which shows that such training indeed obtains a denoising CNN.
On simulated CT datasets, Noise2Inverse demonstrates an improvement in peak signal-to-noise ratio and structural similarity index.
arXiv Detail & Related papers (2020-01-31T12:50:24Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.