An Unconstrained Symmetric Nonnegative Latent Factor Analysis for
Large-scale Undirected Weighted Networks
- URL: http://arxiv.org/abs/2208.04811v1
- Date: Tue, 9 Aug 2022 14:40:12 GMT
- Title: An Unconstrained Symmetric Nonnegative Latent Factor Analysis for
Large-scale Undirected Weighted Networks
- Authors: Zhe Xie, Weiling Li, and Yurong Zhong
- Abstract summary: Large-scale undirected weighted networks are usually found in big data-related research fields.
A symmetric non-negative latent-factor-analysis (SNL) model can efficiently extract latent factors from a symmetric high-dimensional and incomplete (SHDI) matrix.
This paper proposes an unconstrained symmetric nonnegative latent-factor-analysis model.
- Score: 0.22940141855172036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale undirected weighted networks are commonly found in big
data-related research fields. Such a network can naturally be quantified as a
symmetric high-dimensional and incomplete (SHDI) matrix for big data analysis
tasks. A symmetric non-negative latent-factor-analysis (SNL) model can
efficiently extract latent factors (LFs) from an SHDI matrix, yet it relies on
a constraint-combination training scheme that limits its flexibility. To
address this issue, this paper proposes an unconstrained symmetric nonnegative
latent-factor-analysis (USNL) model. Its main idea is two-fold: 1) the output
LFs are separated from the decision parameters by integrating a nonnegative
mapping function into an SNL model; and 2) stochastic gradient descent (SGD)
is adopted for unconstrained model training while ensuring the nonnegativity
of the output LFs. Empirical studies on four SHDI matrices generated from real
big data applications demonstrate that the USNL model achieves higher
prediction accuracy for missing data than the SNL model, along with highly
competitive computational efficiency.
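To make the two-fold idea concrete, below is a minimal sketch of the unconstrained training scheme, assuming an element-wise squared mapping as the nonnegative function and a plain regularized SGD loop over observed entries; the paper's actual mapping, loss, and hyperparameters may differ.

```python
import numpy as np

def train_usnl(entries, n, rank=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Minimal USNL-style sketch.

    The output latent factors X = W ** 2 are nonnegative by construction,
    while the decision parameters W are trained without any constraints by
    SGD over the observed (i, j, value) entries of a symmetric
    high-dimensional and incomplete (SHDI) matrix.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n, rank))   # unconstrained decision parameters
    entries = np.asarray(entries, dtype=float)
    for _ in range(epochs):
        rng.shuffle(entries)                    # stochastic pass over known entries
        for i, j, v in entries:
            i, j = int(i), int(j)
            xi, xj = W[i] ** 2, W[j] ** 2       # nonnegative mapping of parameters
            err = v - xi @ xj                   # error on one observed entry
            # chain rule through the squared mapping: d(xi)/d(W[i]) = 2 * W[i]
            gi = -2.0 * err * xj * W[i] + reg * W[i]
            gj = -2.0 * err * xi * W[j] + reg * W[j]
            W[i] -= lr * gi
            W[j] -= lr * gj
    return W ** 2                               # output latent factors, always nonnegative

# Toy usage on a 4-node network with three observed (symmetric) edge weights.
edges = [(0, 1, 3.0), (1, 2, 1.5), (0, 3, 2.0)]
X = train_usnl(edges, n=4)
print(X @ X.T)   # symmetric nonnegative reconstruction of the SHDI matrix
```

The squared mapping is only one possible choice; any smooth function with a nonnegative range (for example, an exponential or a softplus) would keep the training problem unconstrained while guaranteeing nonnegative output LFs.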
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem in two latent semantic spaces constructed from data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that the gradient of synthetic samples with respect to an SSL objective in naive bilevel optimization is biased due to the randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z) - A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor
Analysis [5.029743143286546]
High-Dimensional and Incomplete (HDI) data are commonly encountered in big data-related applications such as social network service systems.
Nonnegative Latent Factor Analysis (NLFA) models have proven superior in addressing this issue.
This paper presents a dynamic linear bias incorporation scheme.
arXiv Detail & Related papers (2023-09-19T13:48:26Z) - Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis and a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed using precision matrices, which avoids matrix inversions after training is completed (a generic precision-based sampling sketch is given below).
Beyond the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST and demonstrate its ability to perform sample generation and outlier detection.
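As a point of reference for the precision-matrix claim, here is a minimal, generic sketch of drawing Gaussian samples directly from a precision matrix via its Cholesky factor, so the precision matrix is never explicitly inverted; this is a standard linear-algebra device, not the paper's actual MFA routine.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sample_from_precision(mu, P, size=1, seed=0):
    """Draw samples from N(mu, P^{-1}) using only a Cholesky factor of the
    precision matrix P. If P = L L^T and z ~ N(0, I), then x = L^{-T} z has
    covariance (L L^T)^{-1} = P^{-1}, so P is never inverted explicitly."""
    rng = np.random.default_rng(seed)
    L = cholesky(P, lower=True)                    # P = L @ L.T
    z = rng.standard_normal((P.shape[0], size))
    x = solve_triangular(L.T, z, lower=False)      # solves L.T @ x = z
    return (mu[:, None] + x).T                     # shape: (size, dim)

# Toy usage: a 2-D Gaussian specified by its precision matrix.
P = np.array([[2.0, 0.5], [0.5, 1.0]])
samples = sample_from_precision(np.zeros(2), P, size=1000, seed=1)
print(samples.mean(axis=0), np.cov(samples.T))     # should approach 0 and inv(P)
```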
arXiv Detail & Related papers (2023-08-26T06:12:33Z) - Multi-constrained Symmetric Nonnegative Latent Factor Analysis for
Accurately Representing Large-scale Undirected Weighted Networks [2.1797442801107056]
An Undirected Weighted Network (UWN) is frequently encountered in big-data-related applications.
An analysis model should carefully consider its symmetric topology to describe a UWN's intrinsic symmetry.
This paper proposes a Multi-constrained Symmetric Nonnegative Latent-factor-analysis model with a two-fold idea.
arXiv Detail & Related papers (2023-06-06T14:13:16Z) - Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Network (UWN) is commonly found in big data-related applications.
Existing models fail to model either its intrinsic symmetry or its low data density.
A Proximal Symmetric Nonnegative Latent-factor-analysis model is therefore proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z) - A Bayesian Framework on Asymmetric Mixture of Factor Analyser [0.0]
This paper introduces an MFA model with a rich and flexible class of skew normal (unrestricted) generalized hyperbolic (called SUNGH) distributions.
The SUNGH family provides considerable flexibility to model skewness in different directions as well as allowing for heavy tailed data.
Considering factor analysis models, the SUNGH family also allows for skewness and heavy tails for both the error component and factor scores.
arXiv Detail & Related papers (2022-11-01T20:19:52Z) - Second-order Symmetric Non-negative Latent Factor Analysis [3.1616300532562396]
This paper proposes to incorporate an efficient second-order method into SNLF.
The aim is to establish a second-order symmetric non-negative latent factor analysis model.
arXiv Detail & Related papers (2022-03-04T01:52:36Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method maps SIVI's evidence lower bound to a rigorous inference objective with lower gradient variance.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with
Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves a linear convergence rate (the standard form of this bound is recalled after this list).
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
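For context on the linear-convergence claim in the bipartite-graph entry above: the textbook bound for an objective $f$ that is $\mu$-strongly convex and $L$-smooth in the relevant neighborhood, minimized by gradient descent with step size $1/L$ started from a point $\theta_0$ inside that neighborhood, is
$$ f(\theta_t) - f(\theta^\star) \le \Bigl(1 - \frac{\mu}{L}\Bigr)^{t} \bigl( f(\theta_0) - f(\theta^\star) \bigr), \qquad \theta_{t+1} = \theta_t - \tfrac{1}{L}\,\nabla f(\theta_t), $$
which is the standard linear (geometric) rate; the paper's own constants and neighborhood conditions may differ.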