A Dynamic-Neighbor Particle Swarm Optimizer for Accurate Latent Factor
Analysis
- URL: http://arxiv.org/abs/2302.11954v1
- Date: Thu, 23 Feb 2023 12:03:59 GMT
- Title: A Dynamic-Neighbor Particle Swarm Optimizer for Accurate Latent Factor
Analysis
- Authors: Jia Chen, Yixian Chun, Yuanyi Liu, Renyu Zhang and Yang Hu
- Abstract summary: The performance of an LFA model relies heavily on its optimization process.
Some prior studies employ Particle Swarm Optimization (PSO) to enhance an LFA model's optimization process.
This paper proposes a Dynamic-neighbor-cooperated Hierarchical PSO-enhanced LFA model built on two main ideas.
- Score: 8.451827165005993
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-Dimensional and Incomplete (HDI) matrices, which usually contain a large
amount of valuable latent information, can be well represented by a Latent
Factor Analysis (LFA) model. The performance of an LFA model relies heavily on
its optimization process. Thus, some prior studies employ Particle Swarm
Optimization (PSO) to enhance an LFA model's optimization process. However, the
particles within the swarm follow static evolution paths and share only the
global best information, which restricts each particle's search area and can
trap the swarm in sub-optimal solutions. To address this issue, this paper
proposes a Dynamic-neighbor-cooperated Hierarchical PSO-enhanced LFA (DHPL)
model with two main ideas. The first is a neighbor-cooperated strategy, which
enhances each particle's evolution with the velocity of a randomly chosen
neighbor. The second is dynamic hyper-parameter tuning. Extensive experiments
on two benchmark datasets are conducted to evaluate the proposed DHPL model.
The results substantiate that DHPL achieves higher accuracy, without
hyper-parameter tuning, than existing PSO-incorporated LFA models in
representing an HDI matrix.
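The abstract names the neighbor-cooperated idea without giving the update rule, so here is a minimal sketch of how such a velocity update might look; the extra `c3` term, the function name, and all coefficients are illustrative assumptions, not DHPL's actual rule.

```python
import numpy as np

def neighbor_pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0, c3=0.5, rng=None):
    """One PSO update with an extra random-neighbor velocity term.

    X, V, pbest: (n_particles, dim) arrays; gbest: (dim,) array.
    The c3 term blends in a randomly chosen neighbor's velocity --
    one plausible reading of a neighbor-cooperated strategy.
    """
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    r1, r2, r3 = rng.random((3, n, 1))
    neighbors = rng.integers(0, n, size=n)    # a random neighbor per particle
    V = (w * V
         + c1 * r1 * (pbest - X)              # cognitive pull toward own best
         + c2 * r2 * (gbest - X)              # social pull toward global best
         + c3 * r3 * V[neighbors])            # assumed neighbor-cooperated term
    return X + V, V
```

In an LFA setting, each particle's position would encode latent factors or hyper-parameters, scored by the model's loss on the HDI matrix's observed entries.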
Related papers
- Accelerated Preference Optimization for Large Language Model Alignment [60.22606527763201]
Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal tool for aligning large language models (LLMs) with human preferences.
Direct Preference Optimization (DPO) formulates RLHF as a policy optimization problem without explicitly estimating the reward function.
We propose a general Accelerated Preference Optimization (APO) framework, which unifies many existing preference optimization algorithms.
arXiv Detail & Related papers (2024-10-08T18:51:01Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective for investigating the design of large language model (LLM)-based prompts.
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
- Mini-Hes: A Parallelizable Second-order Latent Factor Analysis Model [8.06111903129142]
This paper proposes a miniblock diagonal Hessian-free (Mini-Hes) optimization method for building an LFA model (a generic Hessian-free sketch follows below).
Experiment results indicate that, with Mini-Hes, the LFA model outperforms several state-of-the-art models in the missing-data estimation task.
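The miniblock-diagonal structure is not described in this summary, so the following is only a generic Hessian-free Newton step for context: conjugate gradients driven by matrix-free finite-difference Hessian-vector products. All names here are hypothetical.

```python
import numpy as np

def hessian_free_step(grad_fn, x, eps=1e-6, cg_iters=20, tol=1e-10):
    """Approximate Newton step: solve H d = -g by conjugate gradients.

    x: 1-D parameter vector; grad_fn(x) returns the gradient.
    Hessian-vector products are formed matrix-free via finite
    differences: H v ~= (grad(x + eps*v) - grad(x)) / eps.
    """
    g = grad_fn(x)
    hvp = lambda v: (grad_fn(x + eps * v) - g) / eps
    d = np.zeros_like(x)
    r = -g                      # residual of H d = -g at d = 0
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp + 1e-12)
        d = d + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x + d                # take the Newton-like step
```

Per its title, Mini-Hes restricts such curvature information to small diagonal blocks so the solves parallelize; that restriction is not reproduced in this sketch.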
arXiv Detail & Related papers (2024-02-19T08:43:00Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- An Adam-enhanced Particle Swarm Optimizer for Latent Factor Analysis [6.960453648000231]
We propose an Adam-enhanced Hierarchical PSO-LFA model, which refines the latent factors with a sequential PSO algorithm.
The experimental results on four real datasets demonstrate that our proposed model achieves higher prediction accuracy than its peers.
arXiv Detail & Related papers (2023-02-23T12:10:59Z)
- A Practical Second-order Latent Factor Model via Distributed Particle Swarm Optimization [5.199454801210509]
Hessian-free (HF) optimization is an efficient method for utilizing second-order information of an LF model's objective function.
A practical second-order latent factor (PSLF) model is proposed in this work.
Experiments on real HiDS data sets indicate that the PSLF model has a competitive advantage over state-of-the-art models in data representation ability.
arXiv Detail & Related papers (2022-08-12T05:49:08Z)
- Adaptive Latent Factor Analysis via Generalized Momentum-Incorporated Particle Swarm Optimization [6.2303427193075755]
A stochastic gradient descent (SGD) algorithm is an effective learning strategy for building a latent factor analysis (LFA) model on a high-dimensional and incomplete (HDI) matrix.
A particle swarm optimization (PSO) algorithm is commonly adopted to make an SGD-based LFA model's hyper-parameters, i.e., the learning rate and the regularization coefficient, self-adaptive.
This paper incorporates more historical information into each particle's evolutionary process to avoid premature convergence; a sketch of the underlying SGD update follows below.
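For context, here is a minimal sketch of the SGD update that such a PSO wrapper would tune, assuming the usual regularized matrix-factorization loss over observed entries; the names `eta` and `lam` stand for the learning rate and regularization coefficient mentioned above.

```python
import numpy as np

def sgd_lfa_epoch(P, Q, observed, eta, lam):
    """One SGD pass over the observed entries of an HDI matrix.

    P: (num_rows, k) and Q: (num_cols, k) latent-factor matrices;
    observed: iterable of (row, col, value) triples.
    eta and lam are exactly the hyper-parameters a PSO wrapper
    would self-adapt between epochs.
    """
    for u, i, r in observed:
        err = r - P[u] @ Q[i]                    # prediction error on one entry
        pu = P[u].copy()                         # keep old P[u] for Q's update
        P[u] += eta * (err * Q[i] - lam * P[u])
        Q[i] += eta * (err * pu - lam * Q[i])
    return P, Q
```

Each particle would then hold a candidate (eta, lam) pair, scored by validation error after such a pass.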
arXiv Detail & Related papers (2022-08-04T03:15:07Z)
- An Adaptive Alternating-direction-method-based Nonnegative Latent Factor Model [2.857044909410376]
An alternating-direction-method-based nonnegative latent factor model can perform efficient representation learning on a high-dimensional and incomplete (HDI) matrix.
This paper proposes an Adaptive Alternating-direction-method-based Nonnegative Latent Factor (A2NLF) model, whose hyper-parameter adaptation is implemented following the principle of particle swarm optimization.
Empirical studies on nonnegative HDI matrices generated by industrial applications indicate that A2NLF outperforms several state-of-the-art models in computational and storage efficiency, while maintaining highly competitive accuracy in estimating an HDI matrix's missing data.
arXiv Detail & Related papers (2022-04-11T03:04:26Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance (stated in symbols below).
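In symbols, that result reads roughly as follows; the notation is assumed for illustration, not taken from the paper.

```latex
% Optimal SGLD noise covariance, up to a scale fixed by the
% low-empirical-risk constraint (notation assumed):
\Sigma^{*} \propto \left( \mathbb{E}\!\left[ \nabla \ell(\theta)\, \nabla \ell(\theta)^{\top} \right] \right)^{1/2}
```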
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- MOFA: Modular Factorial Design for Hyperparameter Optimization [47.779983311833014]
MOdular FActorial Design (MOFA) is a novel HPO method that exploits evaluation results through factorial analysis.
We prove that the inference of MOFA achieves higher confidence than other sampling schemes.
Empirical results show that MOFA achieves better effectiveness and efficiency compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-11-18T20:54:28Z)