A Novel Learnable Gradient Descent Type Algorithm for Non-convex
Non-smooth Inverse Problems
- URL: http://arxiv.org/abs/2003.06748v2
- Date: Tue, 24 Mar 2020 23:39:46 GMT
- Title: A Novel Learnable Gradient Descent Type Algorithm for Non-convex
Non-smooth Inverse Problems
- Authors: Qingchao Zhang, Xiaojing Ye, Hongcheng Liu, and Yunmei Chen
- Abstract summary: We propose a novel gradient descent type algorithm for inverse problems with general nonconvex, nonsmooth regularization, together with a neural network architecture imitating the algorithm.
Numerical results show that the proposed network outperforms state-of-the-art reconstruction methods on a variety of image reconstruction problems in terms of efficiency and accuracy.
- Score: 3.888272676868008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization algorithms for solving nonconvex inverse problems have attracted
significant interest recently. However, existing methods require the nonconvex
regularization to be smooth or simple to ensure convergence. In this paper, we
propose a novel gradient descent type algorithm, by leveraging the idea of
residual learning and Nesterov's smoothing technique, to solve inverse problems
consisting of general nonconvex and nonsmooth regularization with provable
convergence. Moreover, we develop a neural network architecture imitating this
algorithm to learn the nonlinear sparsity transformation adaptively from
training data; the network inherits the convergence guarantee and accommodates
the general nonconvex structure of the learned transformation. Numerical results
demonstrate that the proposed network outperforms state-of-the-art methods
on a variety of image reconstruction problems in terms of efficiency
and accuracy.
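
As a rough illustration of the two ingredients the abstract names, here is a minimal sketch that applies Nesterov's smoothing to an l1 regularizer and runs plain gradient descent on the smoothed objective. It assumes a least-squares data fit; the operator A, the weight lam, and the smoothing parameter mu are illustrative, and the paper's learned nonlinear sparsity transformation is replaced by a fixed l1 term.

```python
import numpy as np

def grad_smoothed_l1(x, mu):
    # Nesterov smoothing of ||x||_1: f_mu(x) = sum_i max_{|v|<=1} (v*x_i - mu*v^2/2)
    # is the Huber surrogate, whose gradient is clip(x/mu, -1, 1)
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_gradient_descent(A, b, lam=0.1, mu=1e-2, step=None, iters=500):
    """Gradient descent on 0.5*||Ax - b||^2 + lam * f_mu(x)."""
    if step is None:
        # Lipschitz constant of the smoothed gradient: ||A||^2 + lam/mu
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / mu)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * grad_smoothed_l1(x, mu)
        x = x - step * grad
    return x

# Toy usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = smoothed_gradient_descent(A, b)
```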
Related papers
- A Primal-dual algorithm for image reconstruction with ICNNs [3.4797100095791706]
We address the optimization problem in a data-driven variational framework, where the regularizer is parameterized by an input-convex neural network (ICNN).
While gradient-based methods are commonly used to solve such problems, they struggle to effectively handle nonsmoothness.
We show that the proposed approach outperforms subgradient methods in terms of both speed and stability.
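For context, a network is input-convex when its hidden-to-hidden weights are nonnegative and its activations are convex and nondecreasing; a minimal sketch of that constraint (the layer sizes and the softplus reparameterization are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network: z_{k+1} = softplus(W_k^+ z_k + U_k x), W_k^+ >= 0,
    so the scalar output is a convex function of the input x."""
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Ux = nn.ModuleList(nn.Linear(dim, hidden) for _ in range(layers))
        # Raw weights; nonnegativity is enforced via softplus at forward time
        self.Wz = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(hidden, hidden)) for _ in range(layers - 1)
        )
        self.w_out = nn.Parameter(0.01 * torch.randn(hidden))

    def forward(self, x):
        z = F.softplus(self.Ux[0](x))                    # convex, nondecreasing activation
        for U, W in zip(self.Ux[1:], self.Wz):
            z = F.softplus(z @ F.softplus(W).T + U(x))   # softplus(W) keeps weights >= 0
        return z @ F.softplus(self.w_out)                # nonnegative output weights too
```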
arXiv Detail & Related papers (2024-10-16T10:36:29Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
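The update amounts to averaging the current parameters with the output of an inner optimizer, a Krasnosel'skii-Mann style step that damps oscillations when the inner map is nonexpansive. A minimal sketch, with the inner optimizer, step counts, and mixing weight lam chosen purely for illustration:

```python
import numpy as np

def interpolated_training(grad, theta, inner_steps=5, inner_lr=0.1, lam=0.5, outer_iters=100):
    """theta <- (1 - lam) * theta + lam * T(theta), where T runs a few plain
    gradient steps; averaging with the identity stabilizes the iteration when
    T is nonexpansive but the raw loss landscape is nonmonotone."""
    for _ in range(outer_iters):
        fast = theta.copy()
        for _ in range(inner_steps):
            fast -= inner_lr * grad(fast)        # inner (fast) optimizer
        theta = (1 - lam) * theta + lam * fast   # linear interpolation back
    return theta
```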
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
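A deep equilibrium layer returns the fixed point z* = f(z*, x) of a learned map rather than a fixed stack of activations; a minimal sketch of the forward pass by plain fixed-point iteration (the map f, its scaling, and the tolerance are illustrative):

```python
import torch

def deq_forward(f, x, max_iter=100, tol=1e-5):
    """Iterate z <- f(z, x) to (approximate) convergence; with f contractive
    in z, this is the forward pass of a Deep Equilibrium model."""
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() < tol * (z.norm() + 1e-8):
            return z_next
        z = z_next
    return z

# Example map: a learned operator inside a saturating nonlinearity;
# the 0.5 scaling helps keep the map contractive in z
W = torch.nn.Linear(32, 32)
f = lambda z, x: torch.tanh(0.5 * W(z) + x)
z_star = deq_forward(f, torch.randn(4, 32))
```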
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
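Separating the two parts leads to generalized conditional gradient (Frank-Wolfe style) steps: linearize only the smooth term and keep the nonsmooth term exact in the subproblem over the compact set. A minimal sketch for min f(x) + lam*||x||_1 over an l1 ball, where the subproblem has a one-coordinate closed-form solution (f, lam, and the radius are illustrative):

```python
import numpy as np

def gcg_l1ball(grad_f, x0, radius=1.0, lam=0.1, iters=200):
    """Generalized conditional gradient: linearize only the smooth f, solve
    the subproblem with the nonsmooth lam*||.||_1 kept exact, then take a
    convex-combination step."""
    x = x0.copy()
    for k in range(iters):
        c = grad_f(x)
        i = np.argmax(np.abs(c))
        # argmin_{||v||_1 <= radius} <c, v> + lam*||v||_1 is supported on the
        # coordinate with the largest |c_i|, and is 0 if |c_i| <= lam
        v = np.zeros_like(x)
        if np.abs(c[i]) > lam:
            v[i] = -radius * np.sign(c[i])
        gamma = 2.0 / (k + 2)               # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v
    return x
```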
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to a solution of the inverse problem.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
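Unfolding fixes the number of iterations of a classical solver and makes selected quantities learnable; a minimal sketch that unrolls proximal-gradient (ISTA) steps with learned per-iteration step sizes and thresholds (a generic matrix A stands in for the paper's MRI forward operator):

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """K proximal-gradient iterations for 0.5*||Ax - b||^2 + lam*||x||_1,
    with the step size and threshold of each iteration made learnable."""
    def __init__(self, A, K=10):
        super().__init__()
        self.A = A
        self.steps = nn.Parameter(torch.full((K,), 0.1))
        self.thresh = nn.Parameter(torch.full((K,), 0.05))

    def forward(self, b):
        x = torch.zeros(self.A.shape[1])
        for step, lam in zip(self.steps, self.thresh):
            r = x - step * (self.A.T @ (self.A @ x - b))            # gradient step on the data fit
            x = torch.sign(r) * torch.clamp(r.abs() - lam, min=0)   # soft-thresholding prox
        return x
```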
arXiv Detail & Related papers (2022-11-24T07:38:47Z)
- An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural Network with Group Sparsity [13.27709100571336]
A leaky ReLU network with a group sparse regularization term has been widely used in recent years.
We show that there is a lack of approaches to compute a stationary point deterministically.
We propose an inexact augmented Lagrangian algorithm for solving the new model.
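As a generic illustration of the inexact augmented Lagrangian pattern (the paper's reformulation of leaky ReLU training with group sparsity is more involved; f, c, and the inner solver below are placeholders):

```python
import numpy as np

def inexact_augmented_lagrangian(f_grad, c, c_jac, x0, rho=1.0, outer=20, inner=100, lr=1e-2):
    """min f(x) s.t. c(x) = 0 via L_rho(x, y) = f(x) + y.c(x) + (rho/2)*||c(x)||^2.
    The inner minimization is only approximate (a few gradient steps),
    which is what makes the method 'inexact'."""
    x = x0.copy()
    y = np.zeros_like(c(x0))
    for _ in range(outer):
        for _ in range(inner):                           # inexact inner solve
            g = f_grad(x) + c_jac(x).T @ (y + rho * c(x))
            x -= lr * g
        y = y + rho * c(x)                               # multiplier (dual) update
        rho *= 1.5                                       # optional penalty increase
    return x, y
```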
arXiv Detail & Related papers (2022-05-11T11:53:15Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
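LISTA-style networks replace the fixed matrices and threshold of iterative shrinkage with learned ones; a minimal sketch of one such cell (this is the generic learned-shrinkage pattern, not NLISTA's specific nonlinear extension):

```python
import torch
import torch.nn as nn

class LISTACell(nn.Module):
    """One learned shrinkage iteration x <- soft(W1 y + W2 x, theta);
    classical LISTA ties W1, W2 to the measurement matrix, here they are free."""
    def __init__(self, m, n):
        super().__init__()
        self.W1 = nn.Linear(m, n, bias=False)
        self.W2 = nn.Linear(n, n, bias=False)
        self.theta = nn.Parameter(torch.tensor(0.1))

    def forward(self, x, y):
        r = self.W1(y) + self.W2(x)
        return torch.sign(r) * torch.clamp(r.abs() - self.theta, min=0)
```

Stacking K such cells and training them end-to-end yields a fixed-depth network approximating K solver iterations.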
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
- Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction [4.2476585678737395]
We propose a general learning based framework for solving nonsmooth and nonconvex image reconstruction problems.
We show that the proposed method is convergent and efficient, and that it outperforms state-of-the-art methods on a variety of image reconstruction problems.
arXiv Detail & Related papers (2020-07-22T07:59:07Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
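The key point is updating the coupled variables from gradients taken at the same point rather than alternating between frozen blocks; a minimal sketch on a bilinear least-squares fit with a sparsity-regularized factor (the factorization model and all parameters are illustrative, not the paper's applications):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0)

def cogradient_descent(Y, rank=5, lam=0.1, lr=1e-2, iters=500, seed=0):
    """Synchronous gradient steps on the bilinear fit 0.5*||Y - U V^T||_F^2
    with a soft-threshold enforcing sparsity on U; both factors are updated
    from gradients at the same (coupled) point rather than alternated."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = U @ V.T - Y                  # shared residual couples the two variables
        gU, gV = R @ V, R.T @ U          # gradients at the same point (synchronous)
        U = soft(U - lr * gU, lr * lam)  # proximal step enforces sparsity on U
        V = V - lr * gV
    return U, V
```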
arXiv Detail & Related papers (2020-06-16T13:41:54Z)