Convergence Properties of Score-Based Models for Linear Inverse Problems Using Graduated Optimisation
- URL: http://arxiv.org/abs/2404.18699v2
- Date: Mon, 12 Aug 2024 18:12:44 GMT
- Title: Convergence Properties of Score-Based Models for Linear Inverse Problems Using Graduated Optimisation
- Authors: Pascal Fernsel, Željko Kereta, Alexander Denker
- Abstract summary: We show that score-based generative models (SGMs) can be used to solve inverse problems.
We show that we are able to recover high-quality images, independent of the initial value.
The source code is publicly available on GitHub.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The incorporation of generative models as regularisers within variational formulations for inverse problems has proven effective across numerous image reconstruction tasks. However, the resulting optimisation problem is often non-convex and challenging to solve. In this work, we show that score-based generative models (SGMs) can be used in a graduated optimisation framework to solve inverse problems. We show that the resulting graduated non-convexity flow converges to stationary points of the original problem and provide a numerical convergence analysis of a 2D toy example. We further provide experiments on computed tomography image reconstruction, where we show that this framework is able to recover high-quality images, independent of the initial value. The experiments highlight the potential of using SGMs in graduated optimisation frameworks. The source code is publicly available on GitHub.
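The graduated optimisation idea in the abstract can be illustrated on a 1D toy problem. This is a minimal sketch, not the paper's SGM-based flow: it minimises a Gaussian-smoothed surrogate of a hand-picked non-convex objective (f(x) = x^4 - 3x^2 + x, chosen here for illustration) and anneals the smoothing level to zero. For a polynomial, Gaussian smoothing E[f(x + σZ)] has a closed form, so the smoothed gradient can be written down directly.

```python
def f(x):
    """Non-convex toy objective f(x) = x^4 - 3x^2 + x (illustrative choice)."""
    return x**4 - 3*x**2 + x

def smoothed_grad(x, sigma):
    # Gradient of the Gaussian-smoothed objective E[f(x + sigma*Z)];
    # for this polynomial it is 4x^3 + (12*sigma^2 - 6)x + 1 in closed form.
    return 4*x**3 + (12*sigma**2 - 6)*x + 1

def graduated_descent(x0, sigmas=(2.0, 1.0, 0.5, 0.25, 0.1, 0.0),
                      lr=0.01, steps=500):
    """Gradient descent on a sequence of smoothed problems, from heavily
    smoothed (nearly convex) down to the original non-convex objective."""
    x = x0
    for sigma in sigmas:
        for _ in range(steps):
            x -= lr * smoothed_grad(x, sigma)
    return x

# Independent of the initial value, the flow reaches the global minimiser
# near x* = -1.30 rather than the shallow local minimum near x = 1.13.
for x0 in (-2.0, 0.0, 2.0):
    print(round(graduated_descent(x0), 2))
```

Plain gradient descent on f itself would land in whichever basin contains the initial value; annealing the smoothing is what removes this dependence, mirroring the initial-value independence reported in the abstract.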
Related papers
- EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models [70.60381055741391]
Image restoration poses challenges related to ill-posed problems, resulting in deviations between single-model predictions and ground truths.
Ensemble learning aims to address these deviations by combining the predictions of multiple base models.
We employ an expectation-maximization (EM)-based algorithm to estimate ensemble weights for prediction candidates.
Our algorithm is model-agnostic and training-free, allowing seamless integration and enhancement of various pre-trained image restoration models.
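A generic sketch of EM-estimated mixture weights (not EnsIR's actual algorithm): treat each reference pixel as drawn from a mixture of Gaussians centred at the candidates' predictions, then alternate responsibilities (E-step) and weight updates (M-step). The synthetic `truth`, `preds`, and fixed `sigma` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=1000)                 # stand-in for reference pixels
# three hypothetical base-model predictions with different error levels
preds = np.stack([truth + rng.normal(scale=s, size=1000)
                  for s in (0.1, 0.5, 1.0)])  # shape (K, N)

w = np.full(3, 1.0 / 3)                       # uniform initial mixture weights
sigma = 0.5                                   # fixed component std (assumption)
for _ in range(50):
    # E-step: responsibility of candidate k for each pixel
    lik = np.exp(-0.5 * ((truth - preds) / sigma) ** 2)   # (K, N)
    resp = w[:, None] * lik
    resp /= resp.sum(axis=0, keepdims=True)
    # M-step: weights are the mean responsibilities
    w = resp.mean(axis=1)

print(np.round(w, 2))  # most weight lands on the most accurate candidate
```

The weighted combination `(w[:, None] * preds).sum(axis=0)` then serves as the ensemble output; no retraining of the base models is involved, matching the training-free claim above.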
arXiv Detail & Related papers (2024-10-30T12:16:35Z) - Iteratively Refined Image Reconstruction with Learned Attentive Regularizers [14.93489065234423]
We propose a regularization scheme for image reconstruction that leverages the power of deep learning.
Our scheme is interpretable because it corresponds to the minimization of a series of convex problems.
We offer a promising balance between interpretability, theoretical guarantees, reliability, and performance.
arXiv Detail & Related papers (2024-07-09T07:22:48Z) - Deep Equilibrium Diffusion Restoration with Parallel Sampling [120.15039525209106]
Diffusion model-based image restoration (IR) aims to use diffusion models to recover high-quality (HQ) images from degraded images, achieving promising performance.
Most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
In this work, we aim to rethink the diffusion model-based IR models through a different perspective, i.e., a deep equilibrium (DEQ) fixed point system, called DeqIR.
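The deep-equilibrium idea can be sketched in a few lines (a toy stand-in, not DeqIR itself): instead of unrolling a long sampling chain, solve for the fixed point x* = layer(x*) of a single restoration map. The random `W` and observation `y` below are illustrative assumptions; the small weight scale keeps the map a contraction so the fixed point is unique.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=8)                   # toy "degraded observation"
W = rng.normal(scale=0.1, size=(8, 8))   # small scale => contraction

def layer(x):
    # one "restoration step"; tanh keeps the iteration stable
    return np.tanh(W @ x + y)

# Solve x* = layer(x*) by fixed-point iteration rather than running a
# long serial chain of distinct sampling steps.
x = np.zeros(8)
for _ in range(100):
    x_new = layer(x)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print(np.linalg.norm(layer(x) - x))  # near zero: x is an equilibrium
```

In a real DEQ, `layer` would be a learned network and the fixed point would be found with an accelerated solver (e.g. Anderson acceleration), but the structure of the computation is the same.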
arXiv Detail & Related papers (2023-11-20T08:27:56Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Regularization via deep generative models: an analysis point of view [8.818465117061205]
This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases our technique achieves a clear improvement of the performance and seems to be more robust.
arXiv Detail & Related papers (2021-01-21T15:04:57Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
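The construction behind an ICNN can be shown in a minimal forward pass (an illustrative sketch, not the paper's architecture): weights acting on hidden activations are constrained non-negative and the nonlinearities are convex and non-decreasing, so the output is convex in the input. The shapes and random weights below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
W0 = rng.normal(size=(16, 4))           # input layer: unconstrained
Wz = np.abs(rng.normal(size=(1, 16)))   # hidden -> output: non-negative
Wx = rng.normal(size=(1, 4))            # skip connection: unconstrained

def icnn(x):
    """Minimal input-convex network: ReLU of an affine map is convex,
    a non-negative combination of convex functions is convex, and
    adding an affine skip term preserves convexity."""
    z = np.maximum(W0 @ x, 0.0)         # convex in x
    return float(Wz @ z + Wx @ x)

# Numerical convexity check via the midpoint inequality on random segments
for _ in range(100):
    a, b = rng.normal(size=(2, 4))
    assert icnn((a + b) / 2) <= (icnn(a) + icnn(b)) / 2 + 1e-9
print("midpoint convexity holds on all sampled segments")
```

Using such a network as the regularizer makes the variational objective convex in the image, which is what enables the monotone-decrease guarantee mentioned above.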
arXiv Detail & Related papers (2020-08-06T18:58:35Z) - A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - Low-Rank and Total Variation Regularization and Its Application to Image Recovery [6.288398111817322]
We present an efficient iterative scheme to solve the relaxed problem that essentially employs (weighted) singular value thresholding at each iteration.
We perform extensive experiments, showing that the proposed algorithm outperforms state-of-the-art methodologies in recovering images.
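Assuming the blurb's "value thresholding" refers to singular value thresholding (the proximal operator of the nuclear norm, standard in low-rank recovery), the core step can be sketched as follows; the toy low-rank-plus-noise matrix and threshold `tau` are illustrative assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: prox of tau * nuclear norm at Y.
    Soft-thresholds the singular values and reassembles the matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank 3
noisy = low_rank + 0.1 * rng.normal(size=(20, 20))

X = svt(noisy, tau=2.0)
print(np.linalg.matrix_rank(X))  # noise singular values are suppressed
```

The weighted variant replaces the uniform `tau` with a per-singular-value threshold, penalising small (noise-dominated) singular values more heavily than large (signal-dominated) ones.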
arXiv Detail & Related papers (2020-03-12T10:37:49Z) - Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.