A Compound Gaussian Least Squares Algorithm and Unrolled Network for Linear Inverse Problems
- URL: http://arxiv.org/abs/2305.11120v3
- Date: Tue, 28 Nov 2023 21:53:04 GMT
- Title: A Compound Gaussian Least Squares Algorithm and Unrolled Network for Linear Inverse Problems
- Authors: Carter Lyons, Raghu G. Raj, and Margaret Cheney
- Abstract summary: This paper develops two new approaches to solving linear inverse problems.
The first is an iterative algorithm that minimizes a regularized least squares objective function.
The second is a deep neural network that corresponds to an "unrolling" or "unfolding" of the iterative algorithm.
- Score: 1.283555556182245
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: For solving linear inverse problems, particularly of the type that appears in
tomographic imaging and compressive sensing, this paper develops two new
approaches. The first approach is an iterative algorithm that minimizes a
regularized least squares objective function where the regularization is based
on a compound Gaussian prior distribution. The compound Gaussian prior subsumes
many of the commonly used priors in image reconstruction, including those of
sparsity-based approaches. The developed iterative algorithm gives rise to the
paper's second new approach, which is a deep neural network that corresponds to
an "unrolling" or "unfolding" of the iterative algorithm. Unrolled deep neural
networks have interpretable layers and outperform standard deep learning
methods. This paper includes a detailed computational theory that provides
insight into the construction and performance of both algorithms. The
conclusion is that both algorithms outperform other state-of-the-art approaches
to tomographic image formation and compressive sensing, especially in the
difficult regime of limited training data.
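To make the first approach concrete, below is a minimal NumPy sketch of a regularized least squares iteration under a simple multiplicative compound Gaussian model x = z * u (Gaussian coefficients z scaled elementwise by a positive texture u). The objective, the log-Gaussian texture prior, and the alternating updates are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def cg_least_squares(y, A, iters=50, lam_z=0.1, lam_u=0.1, step=0.01):
    """Alternating-minimization sketch for
        min_{z,u} ||y - A(z*u)||^2 + lam_z*||z||^2 + lam_u*||log u||^2,
    a toy compound Gaussian model: signal x = z*u with Gaussian z
    scaled elementwise by a positive texture u."""
    n = A.shape[1]
    u = np.ones(n)                     # start from a flat texture
    for _ in range(iters):
        # z-update: ridge regression with the effective dictionary A @ diag(u)
        Au = A * u                     # scales column j of A by u[j]
        z = np.linalg.solve(Au.T @ Au + lam_z * np.eye(n), Au.T @ y)
        # u-update: one projected gradient step, keeping the texture positive
        r = A @ (z * u) - y
        grad_u = 2 * (A.T @ r) * z + 2 * lam_u * np.log(u) / u
        u = np.maximum(u - step * grad_u, 1e-8)
    return z * u
```

Each outer iteration solves a ridge problem in z exactly and takes one projected gradient step in u, which is the general alternating structure a compound Gaussian prior induces.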
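The second approach can be sketched in the same spirit: unroll a fixed number of iterations into network layers and let the per-layer step sizes and regularization weights be learned. The toy PyTorch module below unrolls plain gradient descent on a ridge-regularized least squares objective; the paper's network unrolls the compound Gaussian iteration itself, so this is only a structural illustration (A is assumed to be a torch.Tensor forward operator).

```python
import torch
import torch.nn as nn

class UnrolledNet(nn.Module):
    """Toy unrolled network: each layer replays one gradient-descent
    iteration on 0.5*||y - A x||^2 + 0.5*lam*||x||^2, with the per-layer
    step size and regularization weight learned from data."""
    def __init__(self, A, num_layers=10):
        super().__init__()
        self.register_buffer("A", A)
        self.steps = nn.Parameter(torch.full((num_layers,), 0.01))
        self.lams = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, y):
        x = self.A.t() @ y                             # adjoint-based initialization
        for step, lam in zip(self.steps, self.lams):
            r = self.A @ x - y                         # data-term residual
            x = x - step * (self.A.t() @ r + lam * x)  # one unrolled iteration
        return x
```

Training fits the per-layer parameters by backpropagating a reconstruction loss through all unrolled layers, which is what gives unrolled networks their interpretable, iteration-like structure.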
Related papers
- Deep Convolutional Neural Networks Meet Variational Shape Compactness Priors for Image Segmentation [7.314877483509877]
Shape compactness is a key geometrical property for describing regions of interest in many image segmentation tasks.
We propose two novel algorithms to solve the introduced image segmentation problem that incorporates a shape-compactness prior.
The proposed algorithms improve IoU by 20% when trained on a highly noisy image dataset.
arXiv Detail & Related papers (2024-05-23T11:05:35Z)
- Deep Regularized Compound Gaussian Network for Solving Linear Inverse Problems [1.283555556182245]
We devise two novel approaches for linear inverse problems that permit problem-specific statistical prior selections.
The first method is an iterative algorithm that minimizes a regularized least squares objective function.
The second method is a novel deep regularized (DR) neural network, called DR-CG-Net, that learns the prior information.
arXiv Detail & Related papers (2023-11-28T21:53:57Z)
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
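For background on the entry above, the following is a bare Chambolle-Pock primal-dual iteration for 1-D total-variation Gaussian denoising, the kind of fixed-parameter scheme that such proximal networks unroll into layers with learned step sizes. The TV objective, operators, and step sizes are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def chambolle_pock_tv(y, lam=0.5, iters=200):
    """Chambolle-Pock primal-dual iteration for 1-D TV denoising:
        min_x 0.5*||x - y||^2 + lam*||D x||_1."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)     # finite-difference operator, ||D||^2 <= 4
    tau = sigma = 0.25                 # step sizes with sigma*tau*||D||^2 < 1
    x = y.copy(); x_bar = y.copy(); p = np.zeros(n - 1)
    for _ in range(iters):
        # dual ascent + projection onto the l_inf ball of radius lam
        p = np.clip(p + sigma * (D @ x_bar), -lam, lam)
        # primal descent + closed-form prox of the quadratic data term
        x_new = (x - tau * (D.T @ p) + tau * y) / (1 + tau)
        x_bar = 2 * x_new - x          # over-relaxation (theta = 1)
        x = x_new
    return x
```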
- Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA), which consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
arXiv Detail & Related papers (2023-07-12T03:48:26Z)
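As context for the robust PCA entry above, here is a simple alternating heuristic for the low-rank-plus-sparse decomposition M ≈ L + S: a truncated SVD for the low-rank part and elementwise soft-thresholding for the sparse part. This generic sketch is not the paper's unrolled nonconvex algorithm.

```python
import numpy as np

def rpca_alternating(M, rank=2, lam=0.1, iters=50):
    """Alternating heuristic for M ~ L + S with L low-rank and S sparse."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-update: best rank-r approximation of the residual M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # S-update: elementwise soft-threshold of the residual M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```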
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Regularized Training of Intermediate Layers for Generative Models for Inverse Problems [9.577509224534323]
We introduce a principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers.
We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior.
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios.
arXiv Detail & Related papers (2022-03-08T20:30:49Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order search methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
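To illustrate the blockwise idea in the entry above: a generic step that applies a Newton-type update inside a low-dimensional subspace of dominant curvature and a scaled steepest descent step in its orthogonal complement. The subspace choice and scaling below are assumptions for illustration, not SONIA's actual construction.

```python
import numpy as np

def blockwise_step(grad, H, k=5, alpha=0.1):
    """One search direction combining a second-order update in a
    k-dimensional subspace with scaled steepest descent in its
    orthogonal complement (assumes the subspace block of H is invertible)."""
    w, V = np.linalg.eigh(H)            # eigenvalues in ascending order
    Vk = V[:, -k:]                      # dominant-curvature subspace
    # Newton-type update restricted to the subspace
    d_sub = Vk @ np.linalg.solve(Vk.T @ H @ Vk, Vk.T @ grad)
    # scaled steepest descent on the orthogonal complement
    g_perp = grad - Vk @ (Vk.T @ grad)
    return -(d_sub + alpha * g_perp)
```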
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that partitions the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to efficiently escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)