Adaptive Cubic Regularized Second-Order Latent Factor Analysis Model
- URL: http://arxiv.org/abs/2507.03036v1
- Date: Thu, 03 Jul 2025 03:15:54 GMT
- Title: Adaptive Cubic Regularized Second-Order Latent Factor Analysis Model
- Authors: Jialiang Wang, Junzhou Wang, Xin Liao
- Abstract summary: High-dimensional and incomplete (HDI) datasets have become ubiquitous across various real-world applications. We propose a two-fold approach to mitigate non-convex optimization instabilities. Comprehensive experiments on two industrial HDI datasets demonstrate that the ACRSLF model converges faster and achieves higher representation accuracy than advanced optimizer-based LFA models.
- Score: 14.755426957558868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-dimensional and incomplete (HDI) data, characterized by massive node interactions, have become ubiquitous across various real-world applications. Second-order latent factor (SLF) models have shown promising performance in modeling this type of data. Nevertheless, due to the bilinear and non-convex nature of the SLF model's objective function, incorporating a damping term into the Hessian approximation and carefully tuning the associated parameters become essential. To overcome these challenges, we propose a new approach in this study, named the adaptive cubic regularized second-order latent factor analysis (ACRSLF) model. The proposed ACRSLF adopts two ideas: 1) self-tuning cubic regularization that dynamically mitigates non-convex optimization instabilities; 2) multi-Hessian-vector-product evaluation during conjugate gradient iterations for precise second-order information assimilation. Comprehensive experiments on two industrial HDI datasets demonstrate that ACRSLF converges faster and achieves higher representation accuracy than advanced optimizer-based LFA models.
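To make the abstract's two ideas concrete, here is a minimal illustrative sketch in Python/NumPy. It is not the authors' ACRSLF implementation: the function names (hvp, cg_solve, acr_step) are ours, the Hessian-vector product is a finite-difference approximation, and the exact cubic subproblem is replaced by a cheaper sigma-damped Newton solve. What it does show is (1) conjugate gradient iterations that touch the Hessian only through Hessian-vector products, and (2) a self-tuning regularization weight sigma, tightened or relaxed according to the ratio of actual to model-predicted decrease.

```python
import numpy as np

def hvp(grad_f, x, v, eps=1e-6):
    # Finite-difference Hessian-vector product: H(x) v ~= (g(x + eps*v) - g(x)) / eps.
    return (grad_f(x + eps * v) - grad_f(x)) / eps

def cg_solve(matvec, b, tol=1e-8, max_iters=50):
    # Conjugate gradient for matvec(s) = b; the Hessian is accessed only via matvec.
    s = np.zeros_like(b)
    r = b.copy()                     # residual b - A s (s starts at zero)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        s += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return s

def acr_step(f, grad_f, x, sigma):
    # One adaptive cubic-regularization step with weight sigma. Simplification:
    # the cubic subproblem is replaced by a sigma-damped Newton solve, which also
    # keeps the CG system positive definite once sigma is large enough.
    g = grad_f(x)
    s = cg_solve(lambda v: hvp(grad_f, x, v) + sigma * v, -g)
    # Decrease predicted by the cubic model m(s) = f + g's + s'Hs/2 + (sigma/3)||s||^3.
    predicted = -(g @ s + 0.5 * (s @ hvp(grad_f, x, s))
                  + (sigma / 3.0) * np.linalg.norm(s) ** 3)
    rho = (f(x) - f(x + s)) / max(predicted, 1e-12)
    if rho > 0.75:                   # very successful: accept step, relax sigma
        return x + s, 0.5 * sigma
    if rho > 0.1:                    # successful: accept step, keep sigma
        return x + s, sigma
    return x, 2.0 * sigma            # unsuccessful: reject step, tighten sigma

# Toy usage: f(x) = x'Ax/2 - b'x, whose minimizer solves Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * (x @ A @ x) - b @ x
grad_f = lambda x: A @ x - b
x, sigma = np.zeros(2), 1.0
for _ in range(25):
    x, sigma = acr_step(f, grad_f, x, sigma)
```

The accept/reject logic on sigma is what replaces the hand-tuned damping term mentioned in the abstract; a more faithful cubic-subproblem solver (e.g., Lanczos-based, as in standard adaptive cubic regularization methods) would slot in where cg_solve is called.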
Related papers
- Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba. This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z) - LARES: Latent Reasoning for Sequential Recommendation [96.26996622771593]
We present LARES, a novel and scalable LAtent REasoning framework for Sequential recommendation. Our proposed approach employs a recurrent architecture that allows flexible expansion of reasoning depth without increasing parameter complexity. Our framework exhibits seamless compatibility with existing advanced models, further improving their recommendation performance.
arXiv Detail & Related papers (2025-05-22T16:22:54Z) - Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we propose an evolutionary algorithm (HDTwinGen) to autonomously propose, evaluate, and optimize hybrid digital twins (HDTwins).
arXiv Detail & Related papers (2024-10-31T07:28:22Z) - PSLF: A PID Controller-incorporated Second-order Latent Factor Analysis Model for Recommender System [11.650076383080526]
A second-order-based latent factor (SLF) analysis model demonstrates superior performance in graph learning, particularly for high-dimensional and incomplete (HDI) data.
arXiv Detail & Related papers (2024-08-31T13:01:58Z) - Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - A Practical Second-order Latent Factor Model via Distributed Particle Swarm Optimization [5.199454801210509]
Hessian-free (HF) optimization is an efficient method for utilizing second-order information of an LF model's objective function.
A practical SLF (PSLF) model is proposed in this work.
Experiments on real HiDS data sets indicate that the PSLF model has a competitive advantage over state-of-the-art models in data representation ability.
arXiv Detail & Related papers (2022-08-12T05:49:08Z) - Adaptive Divergence-based Non-negative Latent Factor Analysis [6.265179945530255]
This study presents an Adaptive Divergence-based Non-negative Latent Factor (ADNLF) model with three-fold ideas.
The ADNLF model achieves significantly higher estimation accuracy for the missing data of an HDI dataset with high computational efficiency.
arXiv Detail & Related papers (2022-03-30T11:28:36Z) - A Class of Two-Timescale Stochastic EM Algorithms for Nonconvex Latent Variable Models [21.13011760066456]
The Expectation-Maximization (EM) algorithm is a popular choice for learning latent variable models.
In this paper, we propose a general class of methods called Two-Timescale EM Methods.
arXiv Detail & Related papers (2022-03-18T22:46:34Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.