Efficient Consensus Model based on Proximal Gradient Method applied to
Convolutional Sparse Problems
- URL: http://arxiv.org/abs/2011.10100v1
- Date: Thu, 19 Nov 2020 20:52:48 GMT
- Title: Efficient Consensus Model based on Proximal Gradient Method applied to
Convolutional Sparse Problems
- Authors: Gustavo Silva, Paul Rodriguez
- Abstract summary: We derive and detail a theoretical analysis of an efficient consensus algorithm based on a proximal gradient (PG) approach.
The proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
- Score: 2.335152769484957
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Convolutional sparse representation (CSR), a shift-invariant model for
inverse problems, has gained much attention in the fields of signal/image
processing, machine learning and computer vision. The most challenging problems
in CSR imply the minimization of a composite function of the form
$\min_x \sum_i f_i(x) + g(x)$, where a direct and low-cost solution can be difficult to
achieve. However, it has been reported that semi-distributed formulations such
as ADMM consensus can provide important computational benefits. In the present
work, we derive and detail a thorough theoretical analysis of an efficient
consensus algorithm based on a proximal gradient (PG) approach. The effectiveness
of the proposed algorithm with respect to its ADMM counterpart is primarily
assessed in the classic convolutional dictionary learning problem. Furthermore,
our consensus method, which is generically structured, can be used to solve
other optimization problems in which a sum of convex functions with a
regularization term shares a single global variable. As an example, the proposed
algorithm is also applied to another particular convolutional problem for the
anomaly detection task.
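To make the composite structure above concrete, the following is a minimal sketch of a generic consensus-style proximal gradient iteration for $\min_x \sum_i f_i(x) + g(x)$: per-block gradients of the smooth terms $f_i$ are evaluated independently (the step that a semi-distributed formulation parallelizes), aggregated, and followed by a proximal step on the shared regularizer $g$. All function and variable names, the quadratic test blocks, and the soft-thresholding prox are illustrative assumptions; the paper's actual consensus PG algorithm (local copies, weighting, step-size rules, acceleration) may differ.

```python
import numpy as np

def consensus_proximal_gradient(grad_fs, prox_g, x0, step, n_iter=200):
    """Generic sketch for min_x sum_i f_i(x) + g(x) (not the paper's exact scheme).

    grad_fs : list of callables; grad_fs[i](x) returns the gradient of f_i at x.
    prox_g  : callable; prox_g(v, t) = argmin_u g(u) + (1/(2t)) ||u - v||^2.
    x0      : initial global (shared) variable.
    step    : gradient step size, e.g. 1/L with L a Lipschitz bound on sum_i grad f_i.
    """
    x = x0.copy()
    for _ in range(n_iter):
        # Per-block gradients: independent, hence parallelizable across blocks.
        local_grads = [grad(x) for grad in grad_fs]
        # Aggregation over blocks acting on the shared (consensus) variable.
        total_grad = np.sum(local_grads, axis=0)
        # Proximal step on the (possibly non-smooth) regularizer g.
        x = prox_g(x - step * total_grad, step)
    return x

# Illustrative instance: quadratic blocks f_i(x) = 0.5 * ||A_i x - b_i||^2
# and an l1 regularizer g(x) = lam * ||x||_1 handled by soft-thresholding.
def soft_threshold(v, t, lam=0.1):
    return np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)

rng = np.random.default_rng(0)
A = [rng.standard_normal((20, 50)) for _ in range(4)]
b = [A_i @ rng.standard_normal(50) for A_i in A]
grads = [(lambda x, A_i=A_i, b_i=b_i: A_i.T @ (A_i @ x - b_i)) for A_i, b_i in zip(A, b)]
L = sum(np.linalg.norm(A_i, 2) ** 2 for A_i in A)  # Lipschitz bound for the summed gradient
x_hat = consensus_proximal_gradient(grads, soft_threshold, np.zeros(50), step=1.0 / L)
```

For comparison, the ADMM consensus counterpart referenced in the abstract would instead maintain a local copy $x_i$ per block under the constraint $x_i = z$, alternating local minimizations with an averaging/proximal update of the global variable $z$ and dual-variable updates.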
Related papers
- Analysis of the Non-variational Quantum Walk-based Optimisation Algorithm [0.0]
This paper introduces in detail a non-variational quantum algorithm designed to solve a wide range of optimisation problems.
The algorithm returns optimal and near-optimal solutions from repeated preparation and measurement of an amplified state.
arXiv Detail & Related papers (2024-07-29T13:54:28Z) - A unified consensus-based parallel ADMM algorithm for high-dimensional
regression with combined regularizations [3.280169909938912]
The parallel alternating direction method of multipliers (ADMM) is widely recognized for its effectiveness in handling large-scale distributed datasets.
A financial example demonstrates the reliability, stability, and scalability of the proposed algorithms.
arXiv Detail & Related papers (2023-11-21T03:30:38Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Efficient Alternating Minimization Solvers for Wyner Multi-View
Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, local-minima complexity that scales with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z) - A proximal-proximal majorization-minimization algorithm for nonconvex
tuning-free robust regression problems [4.261680642170457]
We introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems.
Our proposed algorithm outperforms the existing state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-25T15:07:13Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - On the implementation of a global optimization method for mixed-variable
problems [0.30458514384586394]
The algorithm is based on the radial basis function method of Gutmann and the metric response surface method of Regis and Shoemaker.
We propose several modifications aimed at generalizing and improving these two algorithms.
arXiv Detail & Related papers (2020-09-04T13:36:56Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - Polynomial-Time Exact MAP Inference on Discrete Models with Global
Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning that ensures a polynomial run-time for solving our target problem, independently of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)