An Adam-adjusting-antennae BAS Algorithm for Refining Latent Factors
- URL: http://arxiv.org/abs/2208.06603v1
- Date: Sat, 13 Aug 2022 09:05:05 GMT
- Title: An Adam-adjusting-antennae BAS Algorithm for Refining Latent Factors
- Authors: Yuanyi Liu, Jia Chen and Di Wu
- Abstract summary: We propose a sequential Adam-adjusting-antennae BAS (A2BAS) optimization algorithm, which refines the latent factors obtained by the PSO-incorporated LFA model.
With experimental results on two real high-dimensional matrices, we demonstrate that our algorithm can effectively solve the premature convergence issue.
- Score: 10.129747457073389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting the latent information in high-dimensional and incomplete matrices
is an important and challenging issue. The Latent Factor Analysis (LFA) model
handles high-dimensional matrix analysis well. Recently, Particle
Swarm Optimization (PSO)-incorporated LFA models have been proposed to tune the
hyper-parameters adaptively with high efficiency. However, the incorporation of
PSO causes the premature convergence problem. To address this issue, we propose a
sequential Adam-adjusting-antennae BAS (A2BAS) optimization algorithm, which
refines the latent factors obtained by the PSO-incorporated LFA model. The
A2BAS algorithm consists of two sub-algorithms. First, we design an improved
BAS algorithm which adjusts beetles' antennae and step-size with Adam; second,
we implement the improved BAS algorithm to optimize all the row and column
latent factors sequentially. With experimental results on two real
high-dimensional matrices, we demonstrate that our algorithm can effectively
solve the premature convergence issue.
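For intuition about the refinement step, here is a minimal sketch, assuming a generic objective `f` (e.g., the RMSE of the LFA model over the observed entries of the matrix) applied to a single latent-factor vector. It shows a beetle antennae search whose finite-difference descent signal is rescaled with Adam moment estimates; all names, hyper-parameters, and the antenna-decay schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def a2bas_refine(f, x, iters=200, delta0=1.0, lr=0.01,
                 beta1=0.9, beta2=0.999, eps=1e-8, decay=0.99):
    """Refine one latent-factor vector x by minimizing f with a BAS
    step whose direction signal is rescaled by Adam moments.
    All hyper-parameter names/defaults are illustrative assumptions."""
    m = np.zeros_like(x)          # Adam first-moment estimate
    v = np.zeros_like(x)          # Adam second-moment estimate
    delta = delta0                # antenna length
    best_x, best_f = x.copy(), f(x)
    for t in range(1, iters + 1):
        b = np.random.randn(*x.shape)
        b /= np.linalg.norm(b) + eps                   # random unit antenna direction
        f_r, f_l = f(x + delta * b), f(x - delta * b)  # "smell" at both antennae
        g = b * np.sign(f_r - f_l)                     # finite-difference descent signal
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)                   # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)    # Adam-scaled BAS step
        delta = max(decay * delta, 1e-4)               # shrink antennae over time
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

The sequential sub-algorithm would then loop such a routine over every row latent factor and every column latent factor of the PSO-trained model in turn, holding the others fixed.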
Related papers
- SGD with Partial Hessian for Deep Neural Networks Optimization [18.78728272603732]
We propose a compound optimizer, which combines a second-order method with a precise partial Hessian matrix for updating channel-wise parameters and first-order stochastic gradient descent (SGD) for updating the other parameters.
Compared with first-order methods, it adopts a certain amount of information from the Hessian matrix to assist optimization; compared with existing second-order methods, it keeps the good generalization performance of first-order methods.
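As a rough illustration of such a compound update (not the paper's optimizer), a hybrid step might rescale the channel-wise gradients by a partial (here, diagonal) Hessian estimate while updating the remaining parameters with plain SGD; all names below are hypothetical.

```python
import numpy as np

def hybrid_step(w_chan, g_chan, h_chan, w_rest, g_rest, lr=0.01, eps=1e-4):
    """Channel-wise parameters: Newton-like step using a diagonal partial
    Hessian estimate h_chan; all other parameters: plain first-order SGD."""
    w_chan = w_chan - lr * g_chan / (np.abs(h_chan) + eps)  # second-order rescaling
    w_rest = w_rest - lr * g_rest                           # first-order step
    return w_chan, w_rest
```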
arXiv Detail & Related papers (2024-03-05T06:10:21Z)
- Dynamic Incremental Optimization for Best Subset Selection [15.8362578568708]
Best subset selection is considered the gold standard for many learning problems.
An efficient primal-dual algorithm is developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2024-02-04T02:26:40Z)
- Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA), which consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
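A toy version of this decomposition (illustrative alternating thresholding, not the unrolled network the paper learns) looks like:

```python
import numpy as np

def rpca_toy(M, lam=None, tau=1.0, iters=100):
    """Split M into low-rank L plus sparse S by alternating
    singular-value thresholding and soft-thresholding."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))      # common heuristic weight
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt   # low-rank step (SVT)
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse step
    return L, S
```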
arXiv Detail & Related papers (2023-07-12T03:48:26Z)
- An Adam-enhanced Particle Swarm Optimizer for Latent Factor Analysis [6.960453648000231]
We propose an Adam-enhanced Hierarchical PSO-LFA model, which refines the latent factors with a sequential PSO algorithm.
The experimental results on four real datasets demonstrate that our proposed model achieves higher prediction accuracy than its peers.
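For context, the canonical PSO update that such models build on is a one-line rule per particle, with the Adam-based refinement layered on top of the positions it finds. The snippet below is the textbook rule with illustrative constants, not the paper's model.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO velocity/position update for a single particle."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v, v
```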
arXiv Detail & Related papers (2023-02-23T12:10:59Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Best Subset Selection with Efficient Primal-Dual Algorithm [24.568094642425837]
Best subset selection is considered the 'gold standard' for many learning problems.
In this paper, we investigate the dual forms of a family of $\ell_0$-regularized problems.
An efficient primal-dual method has been developed based on the primal and dual problem structures.
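For intuition, the simplest primal-side baseline for k-sparse least squares is iterative hard thresholding; the sketch below illustrates the problem being solved, not the paper's primal-dual method.

```python
import numpy as np

def iht_best_subset(X, y, k, iters=200):
    """Iterative hard thresholding for min ||y - Xw||^2 s.t. ||w||_0 <= k."""
    n, d = X.shape
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # step <= 1/L, with L = sigma_max(X)^2
    w = np.zeros(d)
    for _ in range(iters):
        w = w + lr * X.T @ (y - X @ w)     # gradient step on the least-squares loss
        small = np.argsort(np.abs(w))[:-k] # indices of all but the k largest coords
        w[small] = 0.0                     # hard threshold back to k-sparse
    return w
```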
arXiv Detail & Related papers (2022-07-05T14:07:52Z)
- Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning [3.0204520109309843]
We consider first- and second-order techniques to address continuous optimization problems in machine learning.
In the first-order case, we propose a framework of transition from semi-deterministic to quadratic regularization methods.
In the second-order case, we propose a novel algorithm with adaptive sampling and adaptive step size.
arXiv Detail & Related papers (2021-11-29T18:10:00Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
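The core ES estimator is simple: a minimal antithetic version (illustrative, with assumed names) that can stand in for derivative computations using only function-value queries is shown below.

```python
import numpy as np

def es_grad(f, x, sigma=0.1, n_samples=32):
    """Antithetic evolution-strategies estimate of grad f at x,
    using only zeroth-order (function-value) queries."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)                            # random probe direction
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return g / n_samples
```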
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- Joint Deep Reinforcement Learning and Unfolding: Beam Selection and Precoding for mmWave Multiuser MIMO with Lens Arrays [54.43962058166702]
Millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention.
In this work, we investigate the joint design of beam selection and precoding matrices for mmWave MU-MIMO systems with DLA.
arXiv Detail & Related papers (2021-01-05T03:55:04Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
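A minimal sketch-and-solve version for ridge regression, using a Gaussian embedding (an SRHT would replace the matrix S below; names are illustrative, and this omits the paper's adaptive choice of sketch size):

```python
import numpy as np

def sketched_ridge(A, b, lam, m):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 on a sketched system."""
    n, d = A.shape
    S = np.random.randn(m, n) / np.sqrt(m)   # Gaussian sketch with m << n rows
    SA, Sb = S @ A, S @ b                    # compress the data once
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```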
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.