A unified consensus-based parallel ADMM algorithm for high-dimensional
regression with combined regularizations
- URL: http://arxiv.org/abs/2311.12319v1
- Date: Tue, 21 Nov 2023 03:30:38 GMT
- Title: A unified consensus-based parallel ADMM algorithm for high-dimensional
regression with combined regularizations
- Authors: Xiaofei Wu, Zhimin Zhang, Zhenyu Cui
- Abstract summary: The parallel alternating direction method of multipliers (ADMM) algorithm is widely recognized for its effectiveness in handling large-scale datasets stored in a distributed manner.
Extensive simulation experiments, along with a financial example, demonstrate the reliability, stability, and scalability of the proposed algorithms.
- Score: 3.280169909938912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The parallel alternating direction method of multipliers (ADMM) algorithm is
widely recognized for its effectiveness in handling large-scale datasets stored
in a distributed manner, making it a popular choice for solving statistical
learning models. However, there is currently limited research on parallel
algorithms specifically designed for high-dimensional regression with combined
(composite) regularization terms. These terms, such as elastic-net, sparse
group lasso, sparse fused lasso, and their nonconvex variants, have gained
significant attention in various fields due to their ability to incorporate
prior information and promote sparsity within specific groups or fused
variables. The scarcity of parallel algorithms for combined regularizations can
be attributed to the inherent nonsmoothness and complexity of these terms, as
well as the absence of closed-form solutions for certain proximal operators
associated with them. In this paper, we propose a unified constrained
optimization formulation based on the consensus problem for these types of
convex and nonconvex regression problems and derive the corresponding parallel
ADMM algorithms. Furthermore, we prove that the proposed algorithm not only
converges globally but also exhibits a linear convergence rate. Extensive
simulation experiments, along with a financial example, serve to demonstrate
the reliability, stability, and scalability of our algorithm. The R package for
implementing the proposed algorithms can be obtained at
https://github.com/xfwu1016/CPADMM.
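To make the consensus formulation concrete: the data (A, b) is split by rows across M workers, each holding a local loss f_i, and the combined penalty g is applied to a single global consensus variable z, giving the problem minimize sum_i f_i(x_i) + g(z) subject to x_i = z for all i. Below is a minimal sketch of this scheme for the elastic-net case, following the standard consensus-ADMM splitting (Boyd et al., 2011). The serial loop standing in for parallel workers and all variable names are illustrative assumptions; this is not the CPADMM package's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm_enet(A_blocks, b_blocks, lam1, lam2, rho=1.0, n_iter=200):
    """Consensus ADMM for elastic-net least squares with row-partitioned data.

    Minimizes (1/2)||A x - b||^2 + lam1*||x||_1 + (lam2/2)*||x||^2,
    where (A, b) is split into M row blocks held by M workers.
    """
    M = len(A_blocks)
    p = A_blocks[0].shape[1]
    x = [np.zeros(p) for _ in range(M)]   # local primal variables
    u = [np.zeros(p) for _ in range(M)]   # scaled dual variables
    z = np.zeros(p)                       # global consensus variable

    # Pre-factorize each worker's local system (A_i^T A_i + rho I).
    factors = [np.linalg.cholesky(A.T @ A + rho * np.eye(p)) for A in A_blocks]
    Atb = [A.T @ b for A, b in zip(A_blocks, b_blocks)]

    for _ in range(n_iter):
        # Local updates: independent ridge-type solves (parallelizable).
        for i in range(M):
            rhs = Atb[i] + rho * (z - u[i])
            y = np.linalg.solve(factors[i], rhs)       # forward solve
            x[i] = np.linalg.solve(factors[i].T, y)    # backward solve
        # Consensus update: prox of the elastic-net penalty at the average.
        v = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        z = soft_threshold(v, lam1 / (M * rho)) / (1.0 + lam2 / (M * rho))
        # Dual updates.
        for i in range(M):
            u[i] += x[i] - z
    return z
```

For combined penalties such as sparse group lasso or sparse fused lasso (and their nonconvex variants), only the z-update changes; when the proximal operator of g lacks a closed form, that single step is where the additional machinery developed in the paper comes in.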
Related papers
- Regularized Projection Matrix Approximation with Applications to Community Detection [1.3761665705201904]
This paper introduces a regularized projection matrix approximation framework designed to recover cluster information from the affinity matrix.
We investigate three distinct penalty functions, each specifically tailored to address bounded, positive, and sparse scenarios.
Numerical experiments conducted on both synthetic and real-world datasets reveal that our regularized projection matrix approximation approach significantly outperforms state-of-the-art methods in clustering performance.
arXiv Detail & Related papers (2024-05-26T15:18:22Z) - Convergence of Expectation-Maximization Algorithm with Mixed-Integer Optimization [5.319361976450982]
This paper introduces a set of conditions that ensure the convergence of a specific class of EM algorithms.
Our results offer a new analysis technique for iterative algorithms that solve mixed-integer non-linear optimization problems.
arXiv Detail & Related papers (2024-01-31T11:42:46Z) - Distributed Linear Regression with Compositional Covariates [5.085889377571319]
We focus on the distributed sparse penalized linear log-contrast model in massive compositional data.
Two distributed optimization techniques are proposed for solving the two different constrained convex optimization problems.
In the decentralized topology, we introduce a distributed coordinate-wise descent algorithm for obtaining a communication-efficient regularized estimation.
arXiv Detail & Related papers (2023-10-21T11:09:37Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z) - The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal stochastic approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Jointly Modeling and Clustering Tensors in High Dimensions [6.072664839782975]
We consider the problem of jointly modeling and clustering populations of tensors.
We propose an efficient high-dimensional expectation-conditional-maximization algorithm that converges geometrically to a neighborhood that is within statistical precision of the true parameter.
arXiv Detail & Related papers (2021-04-15T21:06:16Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z) - A Dynamical Systems Approach for Convergence of the Bayesian EM
Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.