A Framework of Inertial Alternating Direction Method of Multipliers for
Non-Convex Non-Smooth Optimization
- URL: http://arxiv.org/abs/2102.05433v1
- Date: Wed, 10 Feb 2021 13:55:28 GMT
- Title: A Framework of Inertial Alternating Direction Method of Multipliers for
Non-Convex Non-Smooth Optimization
- Authors: Le Thi Khanh Hien, Duy Nhat Phan, Nicolas Gillis
- Abstract summary: We propose an algorithmic framework dubbed inertial alternating direction methods of multipliers (iADMM) for solving a class of nonconvex nonsmooth multiblock composite optimization problems.
Our framework employs the general minimization-majorization (MM) principle to update each block of variables and to unify the convergence analysis of previous ADMM schemes.
- Score: 17.553531291690025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an algorithmic framework dubbed inertial
alternating direction methods of multipliers (iADMM), for solving a class of
nonconvex nonsmooth multiblock composite optimization problems with linear
constraints. Our framework employs the general minimization-majorization (MM)
principle to update each block of variables so as to not only unify the
convergence analysis of previous ADMM schemes that use specific surrogate functions in
the MM step, but also lead to new efficient ADMM schemes. To the best of our
knowledge, in the \emph{nonconvex nonsmooth} setting, ADMM used in combination
with the MM principle to update each block of variables, and ADMM combined with
inertial terms for the primal variables have not been studied in the
literature. Under standard assumptions, we prove the subsequential convergence
and global convergence for the generated sequence of iterates. We illustrate
the effectiveness of iADMM on a class of nonconvex low-rank representation
problems.
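The abstract describes iADMM's two ingredients, MM surrogate updates for each block and inertial extrapolation of the primal variables, in prose only. The following minimal Python sketch shows one plausible iteration of that pattern on a two-block problem min_{x,y} f(x) + lam*||y||_1 subject to Ax + By = b; the prox-linearized surrogate, the step sizes Lx and Ly, and the inertial weight beta are illustrative placeholders, not the paper's exact algorithm or parameter choices.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (an example nonsmooth block term).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def iadmm_sketch(A, B, b, grad_f, lam=0.1, rho=1.0, beta=0.3, n_iter=200):
        # Hypothetical iADMM-style loop for min f(x) + lam*||y||_1 s.t. Ax + By = b.
        m, n = A.shape
        p = B.shape[1]
        x, x_prev = np.zeros(n), np.zeros(n)
        y, y_prev = np.zeros(p), np.zeros(p)
        z = np.zeros(m)                                 # dual multiplier
        Lx = rho * np.linalg.norm(A, 2) ** 2 + 1.0      # crude curvature bounds for the
        Ly = rho * np.linalg.norm(B, 2) ** 2 + 1.0      # block surrogates (padded for safety)
        for _ in range(n_iter):
            # Inertial (heavy-ball-type) extrapolation of the x-block.
            x_bar = x + beta * (x - x_prev)
            # MM step: one gradient step on a prox-linearized surrogate of the
            # augmented Lagrangian around x_bar.
            g_x = grad_f(x_bar) + A.T @ (z + rho * (A @ x_bar + B @ y - b))
            x_prev, x = x, x_bar - g_x / Lx
            # Same pattern for the y-block, whose proximal map is soft-thresholding.
            y_bar = y + beta * (y - y_prev)
            g_y = B.T @ (z + rho * (A @ x + B @ y_bar - b))
            y_prev, y = y, soft_threshold(y_bar - g_y / Ly, lam / Ly)
            # Dual ascent on the multiplier of the linear constraint.
            z = z + rho * (A @ x + B @ y - b)
        return x, y, z

The prox-linearized surrogate above is only one member of the MM family; the framework's point is that other block surrogates (e.g. Lipschitz-gradient or proximal surrogates) fit the same convergence analysis.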
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level Optimization (BLO) problems.
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth probability distribution and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - A General Continuous-Time Formulation of Stochastic ADMM and Its Variants [5.269633789700637]
We introduce a unified algorithmic framework called generalized ADMM.
Our continuous-time analysis provides us with new insights into ADMM and variants.
We prove that, under proper scaling, the trajectory of ADMM weakly converges to the solution of a stochastic differential equation with small noise.
arXiv Detail & Related papers (2024-04-22T17:12:58Z) - Optimizing ADMM and Over-Relaxed ADMM Parameters for Linear Quadratic
Problems [32.04687753889809]
Alternating Direction Method of Multipliers (ADMM) has gained significant attention across a broad spectrum of machine learning applications.
We propose a general approach to optimize the value of the penalty parameter, followed by a novel closed-form formula for the optimal relaxation parameter; a schematic of where both parameters enter a standard over-relaxed ADMM iteration is sketched after this list.
We then experimentally validate our parameter selection methods through random instantiations and diverse imaging applications.
arXiv Detail & Related papers (2024-01-01T04:01:40Z) - Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
arXiv Detail & Related papers (2023-08-31T14:16:30Z) - Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm, Manifold Gaussian Variational Bayes on the Precision matrix (MGVBP), whose updates satisfy the positive-definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z) - Relational Reasoning via Set Transformers: Provable Efficiency and
Applications to MARL [154.13105285663656]
A cooperative Multi-Agent Reinforcement Learning (MARL) framework with permutation-invariant agents has achieved tremendous empirical successes in real-world applications.
Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works.
We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents.
arXiv Detail & Related papers (2022-09-20T16:42:59Z) - Log-based Sparse Nonnegative Matrix Factorization for Data
Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z) - A Distributed Algorithm for Measure-valued Optimization with Additive
Objective [1.0965065178451106]
We propose a distributed algorithm for solving measure-valued optimization problems with additive objectives.
The proposed algorithm comprises a two-layer alternating direction method of multipliers (ADMM).
The overall algorithm realizes an operator-splitting scheme for gradient flows in the manifold of probability measures.
arXiv Detail & Related papers (2022-02-17T23:09:41Z) - Multiblock ADMM for nonsmooth nonconvex optimization with nonlinear
coupling constraints [3.2815423235774634]
We propose a method of multipliers for solving a class of multiblock nonsmooth nonconvex optimization problems with nonlinear coupling constraints.
We employ a majorization-minimization procedure in the update of each block of the primal variables.
arXiv Detail & Related papers (2022-01-19T15:31:30Z) - Alternating Direction Method of Multipliers for Quantization [15.62692130672419]
We study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm.
We develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules (a schematic $\texttt{ADMM-Q}$-style iteration is sketched after this list).
We empirically evaluate the efficacy of our proposed approaches.
arXiv Detail & Related papers (2020-09-08T01:58:02Z) - Faster Stochastic Alternating Direction Method of Multipliers for
Nonconvex Optimization [110.52708815647613]
In this paper, we propose a faster stochastic alternating direction method of multipliers (ADMM) for nonconvex optimization by using a new stochastic path-integrated differential estimator, called SPIDER-ADMM.
We prove that SPIDER-ADMM achieves a record-breaking incremental first-order oracle (IFO) complexity for finding an $\epsilon$-approximate stationary point.
Our theoretical analysis shows that the online SPIDER-ADMM improves on the IFO complexity of the existing best results.
arXiv Detail & Related papers (2020-08-04T02:59:42Z)
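As referenced in the parameter-optimization entry above, the sketch below shows where the two tuned quantities, the penalty parameter rho and the relaxation parameter alpha, enter a textbook over-relaxed ADMM iteration. The example problem (a lasso-type linear-quadratic objective) and all numeric defaults are illustrative, and the paper's closed-form selection rules are not reproduced here.

    import numpy as np

    def over_relaxed_admm(C, d, lam=0.1, rho=1.0, alpha=1.5, n_iter=300):
        # Solves min_x 0.5*||Cx - d||^2 + lam*||x||_1 with over-relaxed ADMM.
        # alpha = 1 recovers plain ADMM; alpha in (1, 2) is over-relaxation.
        n = C.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        M = C.T @ C + rho * np.eye(n)    # the x-update is a ridge-type solve
        for _ in range(n_iter):
            x = np.linalg.solve(M, C.T @ d + rho * (z - u))
            x_hat = alpha * x + (1.0 - alpha) * z       # over-relaxation step
            v = x_hat + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox step
            u = u + x_hat - z                           # scaled dual update
        return z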
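Similarly, for the quantization entry, here is a hedged schematic of an ADMM-Q-style loop: a continuous iterate is coupled through a multiplier to a copy that is projected onto a finite quantization set. The grad_loss callable, the step size, and the three-level set Q = {-1, 0, +1}^n are hypothetical stand-ins, not the paper's exact algorithm.

    import numpy as np

    def project_to_quantized(v, levels=np.array([-1.0, 0.0, 1.0])):
        # Euclidean projection onto a finite set of quantization levels.
        return levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]

    def admm_q_sketch(grad_loss, n, rho=1.0, step=0.1, n_iter=100):
        x = np.zeros(n)        # continuous iterate
        q = np.zeros(n)        # quantized iterate
        lam = np.zeros(n)      # dual multiplier
        for _ in range(n_iter):
            # x-update: gradient step on the loss plus the augmented coupling.
            x = x - step * (grad_loss(x) + lam + rho * (x - q))
            # q-update: project the shifted iterate onto the quantized set.
            q = project_to_quantized(x + lam / rho)
            lam = lam + rho * (x - q)   # dual update
        return q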