Second-order Symmetric Non-negative Latent Factor Analysis
- URL: http://arxiv.org/abs/2203.02088v1
- Date: Fri, 4 Mar 2022 01:52:36 GMT
- Title: Second-order Symmetric Non-negative Latent Factor Analysis
- Authors: Weiling Li and Xin Luo
- Abstract summary: This study proposes to incorporate an efficient second-order method into SNLF.
The aim is to establish a second-order symmetric non-negative latent factor analysis model for undirected networks.
- Score: 3.1616300532562396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise representation of large-scale undirected network is the basis for
understanding relations within a massive entity set. The undirected network
representation task can be efficiently addressed by a symmetric non-negative
latent factor (SNLF) model, whose objective is clearly non-convex. However,
existing SNLF models commonly adopt a first-order optimizer that cannot well
handle the non-convex objective, thereby resulting in inaccurate representation
results. On the other hand, higher-order learning algorithms are expected to
make a breakthrough, but their computational efficiency is greatly limited due
to the direct manipulation of the Hessian matrix, which can be huge in
undirected network representation tasks. Aiming at addressing this issue, this
study proposes to incorporate an efficient second-order method into SNLF,
thereby establishing a second-order symmetric non-negative latent factor
analysis model for undirected networks with two-fold ideas: a) incorporating a
mapping strategy into the SNLF model to form an unconstrained model, and b)
training the unconstrained model with a specially designed second-order method
to acquire a proper second-order step efficiently. Empirical studies indicate
that the proposed model outperforms state-of-the-art models in representation
accuracy with affordable computational burden.
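The abstract's two-fold idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the mapping `X = Z**2` and the damped diagonal Gauss-Newton step are illustrative stand-ins for the (unspecified) mapping strategy and the specially designed second-order method, chosen only to show how an unconstrained reformulation plus cheap curvature information avoids both the non-negativity constraint and the full Hessian.

```python
import numpy as np

def snlf_second_order_sketch(A, rank=2, iters=400, lr=0.1, damp=0.1, seed=0):
    """Approximate a symmetric non-negative matrix A by X @ X.T with X = Z**2.

    Hedged sketch: x = z**2 and the diagonal curvature surrogate below are
    illustrative choices, not the model proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Z = 0.5 + 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        X = Z ** 2                      # mapping: X >= 0 holds by construction
        R = X @ X.T - A                 # symmetric residual
        G = 4.0 * (R @ X) * Z           # gradient of 0.5*||X X^T - A||_F^2 w.r.t. Z
        # Diagonal Gauss-Newton curvature surrogate: cost stays linear in n*rank,
        # sidestepping the huge Hessian the abstract warns about.
        H = 8.0 * (Z ** 2) * np.diag(X.T @ X)[None, :] + damp
        Z -= lr * G / H                 # damped, curvature-scaled step
    X = Z ** 2
    return X, 0.5 * np.linalg.norm(X @ X.T - A) ** 2
```

Because the factor is expressed as a square, every iterate is feasible without projection, so any unconstrained optimizer (here, a curvature-scaled one) can be applied directly.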
Related papers
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - CoRMF: Criticality-Ordered Recurrent Mean Field Ising Solver [4.364088891019632]
We propose an RNN-based efficient Ising model solver, the Criticality-ordered Recurrent Mean Field (CoRMF)
By leveraging the approximated tree structure of the underlying Ising graph, the newly-obtained criticality order enables the unification between variational mean-field and RNN.
CoRMF solves Ising problems in a self-training fashion without data/evidence, and inference can be executed by directly sampling from the RNN.
arXiv Detail & Related papers (2024-03-05T16:55:06Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
In this paper we examine the use of convex neural network recovery models.
We show that the stationary points of the non-convex training objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Network (UWN) is commonly found in big data-related applications.
Existing models fail to capture either its intrinsic symmetry or its low data density.
A Proximal Symmetric Non-negative Latent-factor-analysis model is proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z) - An Unconstrained Symmetric Nonnegative Latent Factor Analysis for
Large-scale Undirected Weighted Networks [0.22940141855172036]
Large-scale undirected weighted networks are usually found in big data-related research fields.
A symmetric non-negative latent-factor-analysis model is able to efficiently extract latent factors from a symmetric high-dimensional and incomplete (SHDI) matrix.
This paper proposes an unconstrained symmetric nonnegative latent-factor-analysis model.
arXiv Detail & Related papers (2022-08-09T14:40:12Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - PI-NLF: A Proportional-Integral Approach for Non-negative Latent Factor
Analysis [9.087387628717952]
A non-negative latent factor (NLF) model performs efficient representation learning on a high-dimensional and incomplete (HDI) matrix.
A PI-NLF model outperforms the state-of-the-art models in both computational efficiency and estimation accuracy for missing data of an HDI matrix.
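The proportional-integral idea behind PI-NLF can be sketched in a few lines. This is a hedged stand-in, not the paper's algorithm: the learning increment is treated as a control signal with a proportional term (the current gradient) and an integral term (accumulated past gradients), and the gains `kp`/`ki`, learning rate, and projection step are all illustrative choices.

```python
import numpy as np

def pi_nlf_sketch(obs, shape, rank=2, iters=400, lr=0.05, kp=1.0, ki=0.05, seed=0):
    """Factor an HDI matrix from observed entries obs = [(i, j, value), ...].

    Hedged sketch of a PI-style update on non-negative factors P, Q; the
    actual PI-NLF scheme is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n, m = shape
    P = rng.uniform(0.1, 0.5, (n, rank))
    Q = rng.uniform(0.1, 0.5, (m, rank))
    iP = np.zeros_like(P)               # integral (accumulated) gradient terms
    iQ = np.zeros_like(Q)
    for _ in range(iters):
        gP = np.zeros_like(P)
        gQ = np.zeros_like(Q)
        for i, j, v in obs:             # gradients use only observed entries
            e = P[i] @ Q[j] - v
            gP[i] += e * Q[j]
            gQ[j] += e * P[i]
        iP += gP                        # integral action accumulates past error
        iQ += gQ
        P = np.maximum(P - lr * (kp * gP + ki * iP), 0.0)  # PI increment,
        Q = np.maximum(Q - lr * (kp * gQ + ki * iQ), 0.0)  # projected to >= 0
    return P, Q
```

The integral term keeps pushing while the residual error persists, which is the controller's way of accelerating the elimination of steady-state error relative to a plain (proportional-only) gradient step.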
arXiv Detail & Related papers (2022-05-05T12:04:52Z) - Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain models robust against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z) - Multi-objective Explanations of GNN Predictions [15.563499097282978]
Graph Neural Network (GNN) has achieved state-of-the-art performance in various high-stake prediction tasks.
Prior methods use simpler subgraphs to simulate the full model, or counterfactuals to identify the causes of a prediction.
arXiv Detail & Related papers (2021-11-29T16:08:03Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - COMBO: Conservative Offline Model-Based Policy Optimization [120.55713363569845]
Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable.
We develop a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-actions.
We find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods.
arXiv Detail & Related papers (2021-02-16T18:50:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.