Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse
Problems
- URL: http://arxiv.org/abs/2309.09250v1
- Date: Sun, 17 Sep 2023 12:06:04 GMT
- Title: Convex Latent-Optimized Adversarial Regularizers for Imaging Inverse
Problems
- Authors: Huayu Wang, Chen Luo, Taofeng Xie, Qiyu Jin, Guoqing Chen, Zhuo-Xu
Cui, Dong Liang
- Abstract summary: We introduce Convex Latent-Optimized Adversarial Regularizers (CLEAR), a novel and interpretable data-driven paradigm.
CLEAR represents a fusion of deep learning (DL) and variational regularization.
Our method consistently outperforms conventional data-driven techniques and traditional regularization approaches.
- Score: 8.33626757808923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, data-driven techniques have demonstrated remarkable effectiveness
in addressing challenges related to MR imaging inverse problems. However, these
methods still exhibit certain limitations in terms of interpretability and
robustness. In response, we introduce Convex Latent-Optimized Adversarial
Regularizers (CLEAR), a novel and interpretable data-driven paradigm. CLEAR
represents a fusion of deep learning (DL) and variational regularization.
Specifically, we employ a latent optimization technique to adversarially train
an input-convex neural network so that its set of minima fully represents the
real data manifold. We use this network as a convex regularizer to formulate a
CLEAR-informed variational regularization model that guides the solution of the
imaging inverse problem on the real data manifold. Leveraging its inherent
convexity, we have established the convergence of the projected subgradient
descent algorithm for the CLEAR-informed regularization model. This convergence
guarantees the attainment of a unique solution to the imaging inverse problem,
subject to certain assumptions. Furthermore, we have demonstrated the
robustness of our CLEAR-informed model, explicitly showcasing its capacity to
achieve stable reconstruction even in the presence of measurement interference.
Finally, we illustrate the superiority of our approach using MRI reconstruction
as an example. Our method consistently outperforms conventional data-driven
techniques and traditional regularization approaches, excelling in both
reconstruction quality and robustness.
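The abstract stops short of implementation detail, but its two main ingredients are standard enough to sketch. Below is a minimal, hypothetical PyTorch rendition of an input-convex neural network (ICNN) in the style of Amos et al. (2017), which is presumably the kind of network the abstract refers to: convexity in the input is preserved by keeping all hidden-to-hidden weights non-negative and using convex, non-decreasing activations. Layer sizes, names, and the weight-clamping scheme are assumptions, not the authors' code, and the latent-optimized adversarial training loop itself is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Minimal input-convex neural network (ICNN) sketch.

    The map x -> R(x) is convex because the hidden-to-hidden ("z-path")
    weights are kept non-negative and softplus is convex, non-decreasing.
    """

    def __init__(self, dim, hidden=128, layers=3):
        super().__init__()
        # Skip connections from the input may carry arbitrary weights.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden, bias=False) for _ in range(layers - 1)]
            + [nn.Linear(dim, 1, bias=False)]
        )
        # Hidden-to-hidden weights must stay non-negative for convexity.
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(layers - 1)]
            + [nn.Linear(hidden, 1)]
        )

    def clamp_weights(self):
        """Call after each optimizer step to re-project onto convexity."""
        with torch.no_grad():
            for lin in self.Wz:
                lin.weight.clamp_(min=0.0)

    def forward(self, x):
        x = x.flatten(1)
        z = F.softplus(self.Wx[0](x))
        for Wz, Wx in zip(self.Wz[:-1], self.Wx[1:-1]):
            z = F.softplus(Wz(z) + Wx(x))
        return self.Wz[-1](z) + self.Wx[-1](x)  # scalar R(x) per sample
```

Given a trained convex regularizer R, the CLEAR-informed variational model can then be minimized with a projected-subgradient-style loop. The sketch below assumes real-valued images, a generic forward operator A, and a simple box constraint standing in for whatever projection the paper actually uses:

```python
def reconstruct(y, A, R, x0, steps=200, lr=0.1, lam=1e-3):
    """Sketch of  min_x  0.5 * ||A(x) - y||^2 + lam * R(x)."""
    x = x0.clone().requires_grad_(True)
    for t in range(steps):
        loss = 0.5 * (A(x) - y).pow(2).sum() + lam * R(x).sum()
        (g,) = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= lr / (t + 1) ** 0.5 * g  # diminishing step size
            x.clamp_(0.0, 1.0)            # placeholder projection step
    return x.detach()
```

With a diminishing step size, subgradient descent on a convex objective of this form converges to the set of minimizers, which is the property the abstract's uniqueness and stability guarantees build on.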
Related papers
- Coverage-Validity-Aware Algorithmic Recourse [23.643366441803796]
We propose a novel framework to generate a model-agnostic recourse that exhibits robustness to model shifts.
Our framework first builds a coverage-validity-aware linear surrogate of the nonlinear (black-box) model.
We show that our surrogate pushes the approximate hyperplane intuitively, facilitating not only robust but also interpretable recourses.
arXiv Detail & Related papers (2023-11-19T15:21:49Z) - Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency [7.671153315762146]
Training diffusion models in the pixel space is both data-intensive and computationally demanding.
Latent diffusion models, which operate in a much lower-dimensional space, offer a solution to these challenges.
We propose ReSample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models.
arXiv Detail & Related papers (2023-07-16T18:42:01Z) - Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms (a minimal sketch of one unrolled step appears after this list).
We prove that the unfolded DNN converges stably to a solution of the underlying inverse problem.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
arXiv Detail & Related papers (2022-11-24T07:38:47Z) - Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Learned convex regularizers for inverse problems [3.294199808987679]
We propose to learn a data-adaptive input-convex neural network (ICNN) as a regularizer for inverse problems.
We prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations.
We show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
arXiv Detail & Related papers (2020-08-06T18:58:35Z) - Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
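As a companion to the deep-unfolding entry above, here is a minimal sketch of what one unrolled iteration typically looks like: a gradient step on the data-fidelity term followed by a small learned network, stacked K times with untied weights. The module names, CNN shape, and learned step size are illustrative assumptions, not the cited paper's architecture.

```python
import torch
import torch.nn as nn

class UnfoldedStep(nn.Module):
    """One unrolled iteration: x <- prox_theta(x - t * A^T (A x - y))."""

    def __init__(self, channels=1):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learned step size
        self.prox = nn.Sequential(                   # learned proximal map
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, y, A, AT):
        return self.prox(x - self.step * AT(A(x) - y))

class UnfoldedNet(nn.Module):
    """K unrolled iterations with untied weights."""

    def __init__(self, K=8, channels=1):
        super().__init__()
        self.stages = nn.ModuleList(UnfoldedStep(channels) for _ in range(K))

    def forward(self, x0, y, A, AT):
        x = x0
        for stage in self.stages:
            x = stage(x, y, A, AT)
        return x
```

Here A and AT stand in for the forward operator and its adjoint (e.g., a subsampled Fourier transform and its zero-filled inverse in MRI).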