Barzilai and Borwein conjugate gradient method equipped with a
non-monotone line search technique and its application on non-negative matrix
factorization
- URL: http://arxiv.org/abs/2109.05685v1
- Date: Mon, 13 Sep 2021 03:35:36 GMT
- Title: Barzilai and Borwein conjugate gradient method equipped with a
non-monotone line search technique and its application on non-negative matrix
factorization
- Authors: Sajad Fathi Hafshejani, Daya Gaur, Shahadat Hossain, Robert Benkoczi
- Abstract summary: We first modify the non-monotone line search method by introducing a new trigonometric function to calculate the non-monotone parameter.
Under some suitable assumptions, we prove that the new algorithm has the global convergence property.
- The efficiency and effectiveness of the proposed method are demonstrated by applying the algorithm to standard test problems and non-negative matrix factorization problems.
- Score: 1.6058099298620423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new non-monotone conjugate gradient
method for solving unconstrained nonlinear optimization problems. We first
modify the non-monotone line search method by introducing a new trigonometric
function to calculate the non-monotone parameter, which plays an essential
role in the algorithm's efficiency. Then, we use a convex combination of the
two Barzilai-Borwein step sizes to set the step size at each iteration. Under
suitable assumptions, we prove that the new algorithm has the global
convergence property. The efficiency and effectiveness of the proposed method
are demonstrated by applying the algorithm to standard test problems and
non-negative matrix factorization problems.
Related papers
- Efficient Low-rank Identification via Accelerated Iteratively Reweighted Nuclear Norm Minimization [8.879403568685499]
We introduce an adaptive updating strategy for smoothing parameters.
This behavior transforms the algorithm into one that effectively solves smooth problems after a few iterations.
We prove global convergence of the proposed algorithm, guaranteeing that every accumulation point is a critical point (a schematic reweighted step is sketched below).
arXiv Detail & Related papers (2024-06-22T02:37:13Z)
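The summary above gives no formulas, so the following is only a generic sketch of an iteratively reweighted nuclear-norm step: a gradient step on the smooth part followed by weighted singular-value thresholding, with a geometrically shrinking smoothing parameter eps standing in for the paper's adaptive updating strategy. All names and constants here are illustrative assumptions.

    import numpy as np

    def irnn_step(X, grad_f, tau, eps):
        # One iteratively reweighted nuclear-norm step (illustrative):
        # gradient step on the smooth part, then weighted singular-value
        # thresholding with weights w_i = 1 / (sigma_i + eps).
        G = X - tau * grad_f(X)
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        w = 1.0 / (s + eps)                   # small sigma -> large weight
        s_thr = np.maximum(s - tau * w, 0.0)  # weighted thresholding
        return (U * s_thr) @ Vt

    # Hypothetical driver: eps shrinks geometrically as a stand-in for
    # the paper's adaptive smoothing-parameter update.
    M = np.random.default_rng(0).standard_normal((30, 20))
    X, eps = np.zeros_like(M), 1.0
    for _ in range(50):
        X = irnn_step(X, grad_f=lambda Z: Z - M, tau=0.5, eps=eps)
        eps = max(0.1 * eps, 1e-6)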
- Neural incomplete factorization: learning preconditioners for the conjugate gradient method [2.899792823251184]
We develop a data-driven approach to accelerate the generation of effective preconditioners.
We replace the typically hand-engineered preconditioners by the output of graph neural networks.
Our method generates an incomplete factorization of the matrix and is, therefore, referred to as neural incomplete factorization (NeuralIF); the standard preconditioned CG loop that consumes such a factor is sketched below.
arXiv Detail & Related papers (2023-05-25T11:45:46Z)
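The graph neural network in NeuralIF is beyond an abstract-level sketch, but the place its output is consumed, a preconditioned conjugate gradient loop with preconditioner M = L L^T for a lower-triangular factor L, is standard. A minimal sketch, using an exact Cholesky factor as a stand-in for the learned incomplete factor:

    import numpy as np
    from scipy.linalg import solve_triangular

    def pcg(A, b, L, iters=200, tol=1e-8):
        # Preconditioned conjugate gradient with M = L @ L.T; in NeuralIF
        # the factor L would come from a graph neural network.
        def apply_Minv(r):
            y = solve_triangular(L, r, lower=True)
            return solve_triangular(L.T, y, lower=False)
        x = np.zeros_like(b)
        r = b - A @ x
        z = apply_Minv(r)
        p = z.copy()
        rz = float(r @ z)
        for _ in range(iters):
            Ap = A @ p
            alpha = rz / float(p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_Minv(r)
            rz_new = float(r @ z)
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Stand-in usage: an exact Cholesky factor plays the role of the
    # learned incomplete factor.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    x = pcg(A, np.array([1.0, 2.0]), np.linalg.cholesky(A))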
- A New Inexact Proximal Linear Algorithm with Adaptive Stopping Criteria for Robust Phase Retrieval [6.407536646154451]
This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem.
We propose a new inexact proximal linear algorithm in which the subproblem is solved inexactly; our two main contributions are adaptive stopping criteria for the subproblem.
arXiv Detail & Related papers (2023-04-25T02:29:33Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Automated differential equation solver based on the parametric approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution via a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
arXiv Detail & Related papers (2022-05-11T10:06:47Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency (a generic proximal gradient step is sketched below).
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
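The ADMM-to-proximal-gradient conversion is the paper's contribution; the target form it converts to, a proximal gradient step with a closed-form proximal operator, is standard. A minimal sketch for an l1-regularized least-squares (lasso-type) instance, with soft-thresholding as the prox:

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_gradient_lasso(A, b, lam, iters=500):
        # ISTA for min 0.5 * ||A x - b||^2 + lam * ||x||_1,
        # with step size 1/L, L = ||A||_2^2 (Lipschitz constant).
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        return x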
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, such as the RPM algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees (the basic conditional gradient step is sketched below).
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); the Gaussian variant is sketched below.
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
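The SRHT variant and the effective-dimension-adaptive choice of the sketch size are the paper's contributions; the Gaussian-embedding sketch-and-solve baseline for L2-regularized least squares is standard and fits in a few lines. The sketch size m below is an arbitrary illustrative choice:

    import numpy as np

    def sketched_ridge(A, b, lam, m, seed=0):
        # Sketch-and-solve for min ||A x - b||^2 + lam * ||x||^2 using a
        # Gaussian embedding S with m rows (m much smaller than A's rows).
        d = A.shape[1]
        S = np.random.default_rng(seed).standard_normal((m, A.shape[0]))
        S /= np.sqrt(m)
        SA, Sb = S @ A, S @ b
        # Solve the sketched normal equations.
        return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 20))
    b = A @ np.ones(20) + 0.1 * rng.standard_normal(1000)
    x = sketched_ridge(A, b, lam=1.0, m=100)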
- A Block Coordinate Descent-based Projected Gradient Algorithm for Orthogonal Non-negative Matrix Factorization [0.0]
This article utilizes the projected gradient method (PG) for an orthogonal non-negative matrix factorization problem (ONMF).
We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach (a minimal version is sketched below).
arXiv Detail & Related papers (2020-03-23T13:24:43Z)
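Since the main paper above is also evaluated on NMF problems, a minimal sketch of this entry's recipe may be useful: penalise the orthonormality constraint H H^T = I and alternate projected gradient steps over the blocks W and H. The penalty weight rho, the fixed step size, and the dimensions are illustrative assumptions; in practice a line search (e.g., the non-monotone one from the main paper) would replace the fixed step.

    import numpy as np

    def onmf_pg(V, r, rho=1.0, iters=300, step=1e-3, seed=0):
        # Projected gradient for
        #   min ||V - W H||_F^2 + rho * ||H H^T - I||_F^2  s.t. W, H >= 0,
        # alternating gradient steps over the blocks W and H, each
        # followed by projection onto the nonnegative orthant.
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], r))
        H = rng.random((r, V.shape[1]))
        I = np.eye(r)
        for _ in range(iters):
            gW = 2.0 * (W @ H - V) @ H.T
            W = np.maximum(W - step * gW, 0.0)
            gH = 2.0 * W.T @ (W @ H - V) + 4.0 * rho * (H @ H.T - I) @ H
            H = np.maximum(H - step * gH, 0.0)
        return W, H

    # Illustrative run on random nonnegative data.
    V = np.random.default_rng(1).random((20, 15))
    W, H = onmf_pg(V, r=4)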
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.