Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Inverse Problems
- URL: http://arxiv.org/abs/2505.08909v1
- Date: Tue, 13 May 2025 19:00:55 GMT
- Title: Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Inverse Problems
- Authors: Deliang Wei, Peng Chen, Haobo Xu, Jiale Yao, Fang Li, Tieyong Zeng
- Abstract summary: We propose a cocoercive conservative (CoCo) denoiser, which may be (residual) expansive, leading to improved denoising. By leveraging the generalized Helmholtz decomposition, we introduce a novel training strategy that combines Hamiltonian regularization, to promote conservativeness, with spectral regularization, to ensure cocoerciveness.
- Score: 17.861078961765966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Plug-and-play (PnP) methods with deep denoisers have shown impressive results in imaging problems. They typically require strong convexity or smoothness of the fidelity term and a (residual) non-expansive denoiser for convergence. These assumptions, however, are violated in Poisson inverse problems, and non-expansiveness can hinder denoising performance. To address these challenges, we propose a cocoercive conservative (CoCo) denoiser, which may be (residual) expansive, leading to improved denoising. By leveraging the generalized Helmholtz decomposition, we introduce a novel training strategy that combines Hamiltonian regularization to promote conservativeness and spectral regularization to ensure cocoerciveness. We prove that the CoCo denoiser is a proximal operator of a weakly convex function, enabling a restoration model with an implicit weakly convex prior. The global convergence of PnP methods to a stationary point of this restoration model is established. Extensive experimental results demonstrate that our approach outperforms closely related methods in both visual quality and quantitative metrics.
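Reading the abstract at face value, conservativeness means the denoiser is a gradient field (symmetric Jacobian), and 1-cocoercivity of an operator D is equivalent to non-expansiveness of 2D - I. A minimal PyTorch sketch of penalties in that spirit (the function names, the antisymmetric-Jacobian proxy for the non-conservative Helmholtz component, and the power-iteration estimator are my assumptions, not the authors' code):

```python
import torch
from torch.autograd.functional import jvp, vjp

def hamiltonian_penalty(denoiser, x, v):
    """Promote conservativeness: a gradient (conservative) field has a
    symmetric Jacobian J, so penalize ||(J - J^T) v||^2 for a probe v."""
    _, Jv = jvp(denoiser, x, v, create_graph=True)    # J v
    _, JTv = vjp(denoiser, x, v, create_graph=True)   # J^T v
    return (Jv - JTv).pow(2).mean()

def cocoercivity_penalty(denoiser, x, v, iters=5):
    """Promote 1-cocoercivity via its equivalent form: 2D - I nonexpansive.
    Estimate sigma = ||2J - I||_2 by power iteration using only
    matrix-vector products, and penalize any excess over 1.
    v: random probe vector with the shape of x."""
    for _ in range(iters):
        _, Jv = jvp(denoiser, x, v, create_graph=True)
        u = 2.0 * Jv - v                              # (2J - I) v
        _, JTu = vjp(denoiser, x, u, create_graph=True)
        v = 2.0 * JTu - u                             # (2J - I)^T u
        v = v / (v.norm() + 1e-12)
    _, Jv = jvp(denoiser, x, v, create_graph=True)
    sigma = (2.0 * Jv - v).norm()                     # ~ largest singular value
    return torch.relu(sigma - 1.0).pow(2)
```

Both penalties access the Jacobian only through Jacobian-vector and vector-Jacobian products, so no full Jacobian is ever formed.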
Related papers
- Revisiting Convergence: Shuffling Complexity Beyond Lipschitz Smoothness [50.78508362183774]
Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance. Most convergence guarantees, however, require the Lipschitz condition, which is often not met in common machine learning settings.
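For concreteness, "shuffling-type" refers to methods that sample without replacement, visiting every component function once per epoch; a minimal sketch (names and hyperparameters are illustrative):

```python
import numpy as np

def random_reshuffling_sgd(grad_i, x0, n, epochs=10, lr=0.01, seed=0):
    """Shuffling-type SGD: each epoch visits every component gradient
    grad_i(x, i) exactly once, in a freshly shuffled order."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):   # sample without replacement
            x = x - lr * grad_i(x, i)
    return x
```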
arXiv Detail & Related papers (2025-07-11T15:36:48Z)
- You KAN Do It in a Single Shot: Plug-and-Play Methods with Single-Instance Priors [10.726369475010818]
We introduce KAN-PnP, an optimisation framework that incorporates Kolmogorov-Arnold Networks (KANs) as denoisers. KAN-PnP is specifically designed to solve inverse problems with single-instance priors, where only a single noisy observation is available.
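The summary does not spell out the splitting scheme; below is the generic plug-and-play half-quadratic-splitting loop into which such a denoiser is typically plugged (a sketch under my own assumptions; `kan_denoiser`, the step sizes, and the inner gradient loop are placeholders, not the paper's algorithm):

```python
def pnp_hqs(y, A, At, kan_denoiser, rho=1.0, step=0.1, iters=50):
    """Alternate a data-fidelity step on ||Ax - y||^2 + rho||x - z||^2
    with a denoising step z = D(x); A and At are the forward operator
    and its adjoint, given as callables on arrays."""
    x = At(y)
    z = x.copy()
    for _ in range(iters):
        for _ in range(5):  # a few gradient steps on the data term
            x = x - step * (At(A(x) - y) + rho * (x - z))
        z = kan_denoiser(x)  # plug-and-play prior step
    return x
```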
arXiv Detail & Related papers (2024-12-09T04:55:18Z)
- Learning pseudo-contractive denoisers for inverse problems [5.720034382278817]
Deep denoisers have shown excellent performance in solving inverse problems in signal and image processing.
To guarantee convergence, the denoiser needs to satisfy certain Lipschitz conditions, such as non-expansiveness.
This paper introduces a novel training strategy that enforces a weaker constraint, called pseudo-contractiveness, on the deep denoiser.
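The constraint itself is classical: T is pseudo-contractive when ||Tx - Ty||^2 <= ||x - y||^2 + ||(I - T)x - (I - T)y||^2. A minimal sketch of a sample-pair training penalty enforcing it (the loss name and the pairwise sampling are my assumptions, not the paper's exact strategy):

```python
import torch

def pseudo_contractive_penalty(denoiser, x, y):
    """Zero when ||Tx - Ty||^2 <= ||x - y||^2 + ||(I-T)x - (I-T)y||^2."""
    Tx, Ty = denoiser(x), denoiser(y)
    lhs = (Tx - Ty).pow(2).sum()
    rhs = (x - y).pow(2).sum() + ((x - Tx) - (y - Ty)).pow(2).sum()
    return torch.relu(lhs - rhs)  # hinge on the amount of violation
```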
arXiv Detail & Related papers (2024-02-08T12:49:46Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
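The nonexpansive-operator mechanism behind this is the classical Krasnosel'skii-Mann scheme; a minimal sketch (parameter values illustrative):

```python
def km_iteration(T, x, lam=0.5, iters=100):
    """x_{k+1} = (1 - lam) * x_k + lam * T(x_k).  For a nonexpansive T
    with a fixed point, this converges even when plain iteration
    x <- T(x) merely cycles (e.g. when T is a rotation)."""
    for _ in range(iters):
        x = (1.0 - lam) * x + lam * T(x)
    return x
```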
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse Problems [8.33626757808923]
We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
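The summary leaves the reconstruction stage implicit; a generic sketch of how a learned regularizer such as CLEAR is deployed variationally (PyTorch; the loop, initialization, and weights are my assumptions, not the authors' exact scheme):

```python
import torch

def reconstruct(y, A, R_theta, lam=0.1, lr=1e-2, iters=200):
    """Minimize ||Ax - y||^2 + lam * R_theta(x) by gradient descent;
    A is a matrix, R_theta a trained scalar-valued regularizer."""
    x = torch.zeros(A.shape[1], requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = (A @ x - y).pow(2).sum() + lam * R_theta(x)
        loss.backward()
        opt.step()
    return x.detach()
```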
arXiv Detail & Related papers (2023-09-17T12:06:04Z)
- Learning Weakly Convex Regularizers for Convergent Image-Reconstruction Algorithms [16.78532039510369]
We propose to learn non-convex regularizers with a prescribed upper bound on their weak-convexity modulus.
We show that they mimic convex sparsity-promoting regularizers.
We also show that the learned regularizer can be deployed to solve inverse problems with iterative schemes that provably converge.
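For reference, the notion being bounded is the standard one: a functional is rho-weakly convex precisely when a quadratic shift restores convexity,

```latex
R \ \text{is $\rho$-weakly convex} \iff x \mapsto R(x) + \tfrac{\rho}{2}\lVert x\rVert^2 \ \text{is convex.}
```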
arXiv Detail & Related papers (2023-08-21T07:52:39Z)
- Convergent regularization in inverse problems and linear plug-and-play denoisers [3.759634359597638]
Plug-and-play (PnP) denoising is a popular framework for solving inverse imaging problems using off-the-shelf image denoisers.
Not much is known about the properties of the converged solution as the noise level in the measurement vanishes to zero, i.e., whether PnP methods are provably convergent regularization schemes.
We show that, with linear denoisers, relating the implicit regularization of the denoiser to an explicit regularization functional leads to a convergent regularization scheme.
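One standard identity that makes the implicit regularization of a linear denoiser explicit (a known fact; the paper's precise construction may differ): a symmetric linear denoiser with spectrum in (0, 1] is the proximal operator of a quadratic,

```latex
D(x) = Wx,\quad W = W^{\top},\ 0 \prec W \preceq I
\;\Longrightarrow\;
D = \operatorname{prox}_{R}\ \text{with}\ R(x) = \tfrac{1}{2}\, x^{\top}\!\left(W^{-1} - I\right) x .
```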
arXiv Detail & Related papers (2023-07-18T17:16:08Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
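The fixed-point calculation at the core of a Deep Equilibrium model can be sketched as follows (`f_theta`, the tolerance, and the plain iteration in place of an accelerated solver are illustrative assumptions):

```python
import numpy as np

def deq_forward(f_theta, x, z0, tol=1e-6, max_iter=500):
    """Iterate z <- f_theta(z, x) until the equilibrium z* = f_theta(z*, x)."""
    z = z0
    for _ in range(max_iter):
        z_next = f_theta(z, x)
        if np.linalg.norm(z_next - z) <= tol * (np.linalg.norm(z) + 1e-12):
            return z_next
        z = z_next
    return z
```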
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
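The "one-step denoising" is the standard single-shot clean-image estimate of DDPM-style models (notation mine; the paper's norm-guided combination of noise scales is not reproduced here):

```latex
\hat{x}_0 \;=\; \frac{x_t - \sqrt{1 - \bar{\alpha}_t}\;\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}
```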
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone variational inequality problems (VIPs).
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
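A minimal sketch of the clipped stochastic extragradient (SEG) step the experiments refer to (step sizes and clipping level are illustrative assumptions):

```python
import numpy as np

def clip(g, lam):
    """Rescale g so that its norm never exceeds lam."""
    n = np.linalg.norm(g)
    return g if n <= lam else (lam / n) * g

def clipped_seg_step(x, g, gamma=0.1, lam=1.0):
    """One clipped SEG step for the stochastic operator estimate g(.)."""
    x_half = x - gamma * clip(g(x), lam)      # extrapolation step
    return x - gamma * clip(g(x_half), lam)   # update step
```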
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Interpolation can hurt robust generalization even when there is no noise [76.3492338989419]
We show that avoiding interpolation through ridge regularization can significantly improve robust generalization even in the absence of noise.
We prove this phenomenon for the robust risk of both linear regression and classification and hence provide the first theoretical result on robust overfitting.
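For context, the robust risk referred to here is the standard worst-case formulation over norm-bounded input perturbations (notation mine):

```latex
R_\varepsilon(f) \;=\; \mathbb{E}_{(x,y)}\Big[\max_{\lVert\delta\rVert \le \varepsilon} \ell\big(f(x+\delta),\, y\big)\Big]
```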
arXiv Detail & Related papers (2021-08-05T23:04:15Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without the light-tails assumption on the noise.
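A minimal sketch of SGD with gradient clipping, the baseline these bounds cover (hyperparameters illustrative):

```python
import numpy as np

def clipped_sgd(grad, x0, lr=0.01, lam=1.0, steps=1000):
    """SGD where each stochastic gradient is rescaled to norm at most lam,
    capping the influence of heavy-tailed noise."""
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)                                    # stochastic gradient
        n = np.linalg.norm(g)
        x = x - lr * (g if n <= lam else (lam / n) * g)  # clip, then step
    return x
```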
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.