A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor
Analysis
- URL: http://arxiv.org/abs/2309.10618v1
- Date: Tue, 19 Sep 2023 13:48:26 GMT
- Title: A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor
Analysis
- Authors: Yurong Zhong, Zhe Xie, Weiling Li and Xin Luo
- Abstract summary: HDI data is commonly encountered in big data-related applications like social network services systems.
Nonnegative Latent Factor Analysis (NLFA) models have proven superior at addressing this issue.
This paper presents a novel dynamic linear bias incorporation (DLBI) scheme.
- Score: 5.029743143286546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-Dimensional and Incomplete (HDI) data is commonly encountered in big
data-related applications like social network services systems, which concern
the limited interactions among numerous nodes. Knowledge acquisition from HDI
data is a vital issue in the domain of data science due to its rich embedded
patterns like node behaviors, where the fundamental task is to perform HDI data
representation learning. Nonnegative Latent Factor Analysis (NLFA) models have
proven superior at addressing this issue, where a linear bias incorporation
(LBI) scheme is important for preventing training overshooting and fluctuation,
as well as preventing the model from premature convergence. However, existing
LBI schemes are all static ones in which the linear biases are fixed, which
significantly restricts the scalability of the resultant NLFA model and results
in a loss of representation learning ability on HDI data. Motivated by these
observations, this paper presents a novel dynamic linear bias incorporation
(DLBI) scheme. It first extends the linear bias vectors into matrices, and then
builds a binary weight matrix to switch the active/inactive states of the
linear biases. Each entry of the weight matrix switches between the binary
states dynamically according to the variation of the corresponding linear bias
value, thereby establishing dynamic linear biases for an NLFA model. Empirical
studies on three HDI datasets from real applications demonstrate that the
proposed DLBI-based NLFA model achieves higher representation accuracy than
several state-of-the-art models do, as well as highly competitive computational
efficiency.
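The scheme lends itself to a compact illustration. Below is a minimal NumPy sketch, assuming a standard matrix-factorization NLFA trained with squared loss on the observed entries; the switching threshold `tau`, the projected-SGD updates, and all names are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def train_nlfa_dlbi(rows, cols, vals, m, n, k=10, lr=0.01, lam=0.05,
                    tau=1e-4, epochs=100, seed=0):
    """Sketch of an NLFA model with dynamic linear biases (DLBI-style).

    Assumed reading of the scheme: bias *matrices* B (m x k) and C (n x k)
    replace the usual bias vectors, and binary weight matrices Wb, Wc switch
    each bias on/off depending on how much its value still changes between
    epochs. The threshold rule and update form are assumptions.
    """
    rng = np.random.default_rng(seed)
    P = rng.random((m, k))           # nonnegative latent factors
    Q = rng.random((n, k))
    B = rng.random((m, k)) * 0.01    # linear bias vectors extended to matrices
    C = rng.random((n, k)) * 0.01
    Wb = np.ones((m, k))             # binary switches, all biases active at start
    Wc = np.ones((n, k))
    for _ in range(epochs):
        B_prev, C_prev = B.copy(), C.copy()
        for i, j, x in zip(rows, cols, vals):        # observed HDI entries only
            pred = P[i] @ Q[j] + (Wb[i] * B[i]).sum() + (Wc[j] * C[j]).sum()
            e = x - pred
            gp = e * Q[j] - lam * P[i]               # squared-loss gradients
            gq = e * P[i] - lam * Q[j]
            P[i] = np.maximum(P[i] + lr * gp, 0.0)   # projection keeps factors
            Q[j] = np.maximum(Q[j] + lr * gq, 0.0)   # nonnegative
            B[i] = np.maximum(B[i] + lr * (e * Wb[i] - lam * B[i]), 0.0)
            C[j] = np.maximum(C[j] + lr * (e * Wc[j] - lam * C[j]), 0.0)
        # dynamic switching: biases that have stopped moving are deactivated,
        # ones that move again are re-activated (assumed rule, not the paper's)
        Wb = (np.abs(B - B_prev) > tau).astype(float)
        Wc = (np.abs(C - C_prev) > tau).astype(float)
    return P, Q, B, C, Wb, Wc
```

Under this assumed rule, a bias entry whose value has settled is switched off and stops contributing to predictions, while entries that are still adapting stay active.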
Related papers
- Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
An Undirected Weighted Network (UWN) is commonly found in big data-related applications.
Existing models fail to model either its intrinsic symmetry or its low data density.
A Proximal Symmetric Nonnegative Latent-factor-analysis model is proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z)
- A Momentum-Incorporated Non-Negative Latent Factorization of Tensors Model for Dynamic Network Representation [0.0]
A large-scale dynamic network (LDN) is a source of data in many big data-related applications.
A Latent Factorization of Tensors (LFT) model efficiently extracts the time patterns embedded in such data.
LFT models based on stochastic gradient descent (SGD) solvers are often limited by their training schemes and have poor tail convergence.
This paper proposes a novel nonnegative LFT model (MNNL) based on momentum-incorporated SGD to make training unconstrained and compatible with general training schemes (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-04T12:30:53Z)
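For context on the MNNL entry above: a momentum-incorporated SGD step only adds a velocity term to the plain update. A generic Python sketch follows; the nonnegativity projection and all names are assumptions, not the paper's exact solver.

```python
import numpy as np

def momentum_sgd_step(theta, grad, velocity, lr=0.01, beta=0.9):
    """Generic momentum-incorporated SGD update (not MNNL's exact solver).

    The velocity term accumulates past gradients, smoothing the tail of
    training where plain SGD tends to oscillate; the final projection is an
    assumption that keeps nonnegative latent factors feasible.
    """
    velocity = beta * velocity - lr * grad   # momentum accumulation
    theta = theta + velocity                 # parameter step
    return np.maximum(theta, 0.0), velocity
```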
- How robust are pre-trained models to distribution shift? [82.08946007821184]
We show how spurious correlations affect the performance of popular self-supervised learning (SSL) and auto-encoder based (AE) models.
We develop a novel evaluation scheme with a linear head trained on out-of-distribution (OOD) data, to isolate the performance of the pre-trained models from any bias of the linear head used for evaluation (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-17T16:18:28Z)
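As a concrete illustration of the evaluation scheme in the entry above, here is a hedged scikit-learn sketch of a linear probe whose head is fit on out-of-distribution data; `encode` and the data splits are assumed placeholders, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_ood(encode, X_head_train, y_head_train, X_test, y_test):
    """Sketch of a linear-probe evaluation in the spirit of the entry above.

    The pre-trained encoder stays frozen and only a linear head is fit,
    here on an out-of-distribution split, so the test score reflects the
    representation itself rather than a head biased toward one domain.
    `encode` is any frozen feature extractor returning 2-D arrays (assumed).
    """
    head = LogisticRegression(max_iter=1000)
    head.fit(encode(X_head_train), y_head_train)   # train head on OOD data
    return head.score(encode(X_test), y_test)      # accuracy with frozen features
```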
- Graph-incorporated Latent Factor Analysis for High-dimensional and Sparse Matrices [9.51012204233452]
A high-dimensional and sparse (HiDS) matrix is frequently encountered in big data-related applications like e-commerce systems or social network services systems.
This paper proposes a graph-incorporated latent factor analysis (GLFA) model to perform representation learning on an HiDS matrix.
Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix.
arXiv Detail & Related papers (2022-04-16T15:04:34Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model in both settings: task-specific biased models with prior knowledge and self-ensemble biased models without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets (see the sketch after this entry).
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
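To make the Rank-R FNN entry above concrete, here is a minimal sketch of one rank-R neuron: constraining the weight tensor of a 2-mode input to CP form reduces the dense inner product to R small bilinear terms. Shapes, names, and the tanh nonlinearity are assumptions, not the paper's exact model.

```python
import numpy as np

def rank_r_neuron(X, A1, A2, bias):
    """Hypothetical rank-R feedforward neuron for a 2-mode input X (I1 x I2).

    Instead of flattening X and learning a dense I1*I2 weight vector, the
    weight tensor is constrained to rank-R CP form W = sum_r a1_r (outer) a2_r,
    so the response <W, X> reduces to sum_r a1_r^T X a2_r.
    A1 has shape (R, I1), A2 has shape (R, I2).
    """
    response = sum(a1 @ X @ a2 for a1, a2 in zip(A1, A2)) + bias
    return np.tanh(response)   # any nonlinearity would do; tanh is an assumption

# Usage: a rank-2 neuron on a 4x5 input
rng = np.random.default_rng(0)
X = rng.random((4, 5))
out = rank_r_neuron(X, rng.random((2, 4)), rng.random((2, 5)), bias=0.0)
```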
- LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.