Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction
- URL: http://arxiv.org/abs/2007.11245v5
- Date: Sat, 3 Sep 2022 10:55:02 GMT
- Title: Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction
- Authors: Yunmei Chen, Hongcheng Liu, Xiaojing Ye, Qingchao Zhang
- Abstract summary: We propose a general learning based framework for solving nonsmooth and nonconvex image reconstruction problems.
We show that the proposed network is parameter-efficient and compares favorably to state-of-the-art methods on a variety of image reconstruction problems.
- Score: 4.2476585678737395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a general learning based framework for solving nonsmooth and
nonconvex image reconstruction problems. We model the regularization function
as the composition of the $l_{2,1}$ norm and a smooth but nonconvex feature
mapping parametrized as a deep convolutional neural network. We develop a
provably convergent descent-type algorithm to solve the nonsmooth nonconvex
minimization problem by leveraging Nesterov's smoothing technique and the
idea of residual learning, and learn the network parameters such that the
outputs of the algorithm match the references in training data. Our method is
versatile, as one can incorporate various modern network structures into the
regularization, and the resulting network inherits the guaranteed convergence
of the algorithm. We also show that the proposed network is parameter-efficient
and its performance compares favorably to the state-of-the-art methods in a
variety of image reconstruction problems in practice.
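For intuition, Nesterov's smoothing replaces the nonsmooth $l_{2,1}$ regularizer $\sum_i \|g_i(x)\|_2$ with a Huber-type differentiable surrogate

$$r_\epsilon(x) = \sum_i h_\epsilon(\|g_i(x)\|_2), \qquad h_\epsilon(t) = \begin{cases} t - \epsilon/2, & t \ge \epsilon,\\ t^2/(2\epsilon), & t < \epsilon,\end{cases}$$

so ordinary gradients of the smoothed objective exist. Below is a minimal sketch of one smoothed descent step, assuming a least-squares data-fidelity term $\frac{1}{2}\|Ax-y\|^2$; the names `FeatureCNN` and `smoothed_l21` and all step-size values are illustrative placeholders, not the paper's actual architecture or schedule.

```python
# A minimal sketch (not the authors' released code) of one smoothed descent step,
# assuming the model in the abstract: minimize 0.5*||A(x) - y||^2 + lam * r_eps(x),
# where r_eps is the Nesterov-smoothed l_{2,1} norm of a CNN feature map g(x).
# FeatureCNN, smoothed_l21, eps, alpha, and lam are illustrative placeholders.
import torch
import torch.nn as nn


class FeatureCNN(nn.Module):
    """Smooth but nonconvex feature mapping g, parametrized as a small CNN."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.Softplus(),  # smooth activation keeps g differentiable everywhere
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def smoothed_l21(g: torch.Tensor, eps: float) -> torch.Tensor:
    """Huber-type surrogate of ||g||_{2,1} given by Nesterov's smoothing."""
    mag = g.pow(2).sum(dim=1).add(1e-12).sqrt()  # per-pixel l2 norm over channels
    return torch.where(mag >= eps, mag - eps / 2, mag.pow(2) / (2 * eps)).sum()


def descent_step(x, y, A, g_net, eps=1e-2, alpha=1e-1, lam=5e-2):
    """One gradient step on the smoothed objective: one 'layer' of the unrolled net."""
    x = x.detach().requires_grad_(True)
    obj = 0.5 * (A(x) - y).pow(2).sum() + lam * smoothed_l21(g_net(x), eps)
    (grad,) = torch.autograd.grad(obj, x)
    return (x - alpha * grad).detach()  # alpha and eps would be learned per layer
```

In the paper's scheme, such steps are unrolled into network layers whose parameters (including the feature map $g$, built with residual learning) are trained so that the algorithm's outputs match the references in the training data; the sketch above only illustrates the smoothed-descent mechanics.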
Related papers
- A Primal-dual algorithm for image reconstruction with ICNNs [3.4797100095791706]
We address the optimization problem in a data-driven variational framework, where the regularizer is parameterized by an input-convex neural network (ICNN).
While gradient-based methods are commonly used to solve such problems, they struggle to effectively handle nonsmoothness.
We show that the proposed approach outperforms subgradient methods in terms of both speed and stability.
arXiv Detail & Related papers (2024-10-16T10:36:29Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, convergence guarantees and generalizability of the unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural Network with Group Sparsity [13.27709100571336]
A leaky ReLU network with a group regularization term has been widely used in recent years.
However, approaches that deterministically compute a stationary point of such a model have been lacking.
We propose an inexact augmented Lagrangian algorithm for solving the new model.
arXiv Detail & Related papers (2022-05-11T11:53:15Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- A Residual Solver and Its Unfolding Neural Network for Total Variation Regularized Models [5.9622541907827875]
This paper proposes to solve the Total Variation regularized models by finding the residual between the input and the unknown optimal solution.
We numerically confirm that the residual solver can reach the same global optimal solutions as the classical method on 500 natural images.
Both the proposed algorithm and neural network are successfully applied on several problems to demonstrate their effectiveness and efficiency.
arXiv Detail & Related papers (2020-09-08T01:44:34Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z)
- A Novel Learnable Gradient Descent Type Algorithm for Non-convex Non-smooth Inverse Problems [3.888272676868008]
We propose a novel learnable gradient descent type algorithm to solve non-convex non-smooth inverse problems, combining a general descent architecture with learned neural network components.
Results show that the proposed network outperforms state-of-the-art reconstruction methods on different image reconstruction problems in terms of efficiency and accuracy.
arXiv Detail & Related papers (2020-03-15T03:44:43Z)