Learning pseudo-contractive denoisers for inverse problems
- URL: http://arxiv.org/abs/2402.05637v1
- Date: Thu, 8 Feb 2024 12:49:46 GMT
- Title: Learning pseudo-contractive denoisers for inverse problems
- Authors: Deliang Wei, Peng Chen, Fang Li
- Abstract summary: Deep denoisers have shown excellent performance in solving inverse problems in signal and image processing.
To guarantee convergence, the denoiser needs to satisfy a Lipschitz-type condition such as non-expansiveness.
This paper introduces a novel training strategy that enforces a weaker constraint on the deep denoiser called pseudo-contractiveness.
- Score: 5.720034382278817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep denoisers have shown excellent performance in solving inverse problems
in signal and image processing. To guarantee convergence, the denoiser needs
to satisfy a Lipschitz-type condition such as non-expansiveness.
However, enforcing such constraints inevitably compromises recovery
performance. This paper introduces a novel training strategy that enforces a
weaker constraint on the deep denoiser called pseudo-contractiveness. By
studying the spectrum of the Jacobian matrix, relationships between different
denoiser assumptions are revealed. Effective algorithms based on gradient
descent and Ishikawa process are derived, and further assumptions of strict
pseudo-contractiveness yield efficient algorithms using half-quadratic
splitting and forward-backward splitting. The proposed algorithms theoretically
converge strongly to a fixed point. A training strategy based on holomorphic
transformation and functional calculi is proposed to enforce the
pseudo-contractive denoiser assumption. Extensive experiments demonstrate
superior performance of the pseudo-contractive denoiser compared to related
denoisers. The proposed methods are competitive in terms of visual effects and
quantitative values.
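For intuition on the fixed-point schemes mentioned in the abstract: an operator T is pseudo-contractive if ||T(x) - T(y)||^2 <= ||x - y||^2 + ||(I - T)(x) - (I - T)(y)||^2 for all x, y, a strictly weaker requirement than non-expansiveness, and the Ishikawa process iterates y_k = (1 - beta_k) x_k + beta_k T(x_k), then x_{k+1} = (1 - alpha_k) x_k + alpha_k T(y_k). Below is a minimal NumPy sketch of that iteration; the function name, step-size schedule, and toy operator are illustrative placeholders, not the authors' implementation.

    import numpy as np

    def ishikawa_fixed_point(T, x0, alphas, betas, tol=1e-6):
        # Ishikawa iteration toward a fixed point of an operator T,
        # e.g. a (pseudo-contractive) denoiser. alphas and betas are
        # step-size sequences in (0, 1] of equal length.
        x = x0
        for a, b in zip(alphas, betas):
            y = (1 - b) * x + b * T(x)          # inner averaging step
            x_new = (1 - a) * x + a * T(y)      # outer averaging step
            if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
                return x_new
            x = x_new
        return x

    # Toy usage: a contractive map stands in for a learned denoiser.
    T = lambda z: 0.9 * z
    steps = 1.0 / np.sqrt(np.arange(2, 202))    # beta_k -> 0, sum(alpha_k * beta_k) diverges
    x_star = ishikawa_fixed_point(T, np.random.randn(64, 64), alphas=steps, betas=steps)

Classical results on this scheme require beta_k -> 0 with a divergent sum of alpha_k * beta_k; what the paper adds is a training strategy that makes a learned denoiser actually satisfy the pseudo-contractive assumption, so such fixed-point guarantees apply.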
Related papers
- Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness [50.78508362183774]
Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance.
Most require the Lipschitz condition, which is often not met in common machine learning schemes.
arXiv Detail & Related papers (2025-07-11T15:36:48Z) - Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Inverse Problems [17.861078961765966]
We propose a cocoercive conservative (CoCo) denoiser, which may be (residual) expansive, leading to improved denoising.
By leveraging the generalized Helmholtz decomposition, we introduce a novel training strategy that incorporates Hamiltonian regularization to promote conservativeness.
arXiv Detail & Related papers (2025-05-13T19:00:55Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex.
Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs.
We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Plug-and-Play image restoration with Stochastic deNOising REgularization [8.678250057211368]
We propose a new framework called Stochastic deNOising REgularization (SNORE).
SNORE applies the denoiser only to images with noise of the adequate level.
It is based on an explicit stochastic regularization, which leads to a stochastic gradient descent algorithm to solve inverse problems.
arXiv Detail & Related papers (2024-02-01T18:05:47Z) - Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, convergence guarantees and generalizability of the unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z) - Rethinking SIGN Training: Provable Nonconvex Acceleration without First-
and Second-Order Gradient Lipschitz [66.22095739795068]
Sign-based methods have gained attention due to their ability to achieve robust performance while using only the sign information for parameter updates.
The current convergence analysis of sign-based methods relies on the strong assumptions of first-order and second-order gradient Lipschitz.
In this paper, we analyze their convergence under more realistic assumptions that do not require first- or second-order gradient Lipschitz.
arXiv Detail & Related papers (2023-10-23T06:48:43Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Unsupervised Image Denoising in Real-World Scenarios via
Self-Collaboration Parallel Generative Adversarial Branches [28.61750072026107]
Deep learning methods have shown remarkable performance in image denoising, particularly when trained on large-scale paired datasets.
However, acquiring such paired datasets for real-world scenarios poses a significant challenge.
arXiv Detail & Related papers (2023-08-13T14:04:46Z) - Convergent regularization in inverse problems and linear plug-and-play
denoisers [3.759634359597638]
Plug-and-play (PnP) denoising is a popular framework for solving inverse problems in imaging using image denoisers.
Not much is known about the properties of the converged solution as the noise level in the measurement vanishes to zero, i.e., whether such schemes are provably convergent regularization methods.
We show that, with linear denoisers, relating the implicit regularization of the denoiser to an explicit regularization functional leads to a convergent regularization scheme.
arXiv Detail & Related papers (2023-07-18T17:16:08Z) - First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods under Markovian noise.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structural prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through proximal algorithms by replacing a proximal operator with a denoising neural network.
We show that this denoiser actually corresponds to a gradient function.
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Gradient Step Denoiser for convergent Plug-and-Play [5.629161809575015]
Plug-and-Play methods can lead to tremendous visual performance for various image problems.
We propose a new type of Plug-and-Play method, based on half-quadratic splitting.
Experiments show that it is possible to learn such a deep denoiser while not compromising the performance.
arXiv Detail & Related papers (2021-10-07T07:11:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.