A Momentum-Incorporated Non-Negative Latent Factorization of Tensors
Model for Dynamic Network Representation
- URL: http://arxiv.org/abs/2305.02782v1
- Date: Thu, 4 May 2023 12:30:53 GMT
- Title: A Momentum-Incorporated Non-Negative Latent Factorization of Tensors
Model for Dynamic Network Representation
- Authors: Aoling Zeng
- Abstract summary: A large-scale dynamic network (LDN) is a source of data in many big data-related applications.
Such a network can be modeled as a high-dimensional incomplete (HDI) tensor whose time patterns a latent factorization of tensors (LFT) model extracts efficiently.
LFT models based on stochastic gradient descent (SGD) solvers are often limited by their training schemes and have poor tail convergence.
This paper proposes a novel nonlinear LFT model (MNNL) based on momentum-incorporated SGD to make training unconstrained and compatible with general training schemes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A large-scale dynamic network (LDN) is a source of data in many big
data-related applications due to its large number of entities and large-scale
dynamic interactions. It can be modeled as a high-dimensional incomplete
(HDI) tensor that contains a wealth of knowledge about time patterns. A latent
factorization of tensors (LFT) model, which can be built with stochastic
gradient descent (SGD) solvers, extracts these time patterns efficiently.
However, LFT models based on SGD are often limited by training schemes and have
poor tail convergence. To solve this problem, this paper proposes a novel
nonlinear LFT model (MNNL) based on momentum-incorporated SGD, which extracts
non-negative latent factors from HDI tensors to make training unconstrained and
compatible with general training schemes, while improving convergence accuracy
and speed. Empirical studies on two LDN datasets show that, compared to
existing models, the MNNL model achieves higher prediction accuracy and faster convergence.
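
To make the training scheme concrete, here is a minimal sketch (not the authors' code) of momentum-incorporated SGD on a CP-style third-order factorization of an HDI tensor's observed entries. The rank, learning rate, momentum coefficient, and the clip-at-zero projection used to keep factors non-negative are all illustrative assumptions; the paper's MNNL model may realize non-negativity differently.

```python
import numpy as np

def train_momentum_nlft(entries, shape, rank=8, lr=0.01, beta=0.9, epochs=50, seed=0):
    """Momentum-incorporated SGD on a CP-style factorization of a sparse
    3-way tensor. `entries` is a list of (i, j, k, value) observations."""
    I, J, K = shape
    rng = np.random.default_rng(seed)
    U, S, T = (rng.random((n, rank)) for n in (I, J, K))
    vU, vS, vT = np.zeros_like(U), np.zeros_like(S), np.zeros_like(T)
    for _ in range(epochs):
        for i, j, k, y in entries:
            err = np.dot(U[i] * S[j], T[k]) - y      # prediction error on one entry
            gU, gS, gT = err * S[j] * T[k], err * U[i] * T[k], err * U[i] * S[j]
            # momentum accumulates a moving average of past gradients
            vU[i] = beta * vU[i] + lr * gU
            vS[j] = beta * vS[j] + lr * gS
            vT[k] = beta * vT[k] + lr * gT
            # projection keeps the latent factors non-negative (an assumption here)
            U[i] = np.maximum(U[i] - vU[i], 0.0)
            S[j] = np.maximum(S[j] - vS[j], 0.0)
            T[k] = np.maximum(T[k] - vT[k], 0.0)
    return U, S, T

# Toy usage: a 4x4x3 tensor with three observed interactions.
factors = train_momentum_nlft([(0, 1, 0, 0.5), (2, 3, 1, 0.8), (1, 1, 2, 0.3)], (4, 4, 3))
```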
Related papers
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Large-scale Dynamic Network Representation via Tensor Ring Decomposition [0.0]
Large-scale Dynamic Networks (LDNs) are becoming increasingly important in the Internet age.
This work proposes a model based on the tensor ring (TR) decomposition for efficient representation learning on an LDN.
Experimental studies on two real LDNs demonstrate that the proposed method achieves higher accuracy than existing models. (A toy TR reconstruction sketch follows this entry.)
arXiv Detail & Related papers (2023-04-18T08:02:48Z)
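
To illustrate the tensor-ring format behind the entry above, here is a toy sketch (my own code, not the paper's) of reconstructing a single tensor entry from ring-linked cores; the shapes and ranks are illustrative.

```python
import numpy as np

def tr_entry(cores, idx):
    """One entry of a tensor in tensor-ring (TR) format.
    cores[k] has shape (r_k, n_k, r_{k+1}), with the ring closed by r_0 == r_N,
    so the chained slice product is a square matrix whose trace is the entry."""
    mat = cores[0][:, idx[0], :]
    for core, i in zip(cores[1:], idx[1:]):
        mat = mat @ core[:, i, :]
    return np.trace(mat)

# Toy usage: a 5x6x7 tensor stored as three TR cores with ring ranks (2, 3, 4, 2).
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(2, 5, 3), (3, 6, 4), (4, 7, 2)]]
value = tr_entry(cores, (1, 2, 3))
```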
- Deep Neural Network Based Accelerated Failure Time Models using Rank Loss [0.0]
An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates.
Deep neural networks (DNNs) have received considerable attention over the past decades and have achieved remarkable success in a variety of fields.
We propose to apply DNNs in fitting AFT models using a Gehan-type loss, combined with a sub-sampling technique. (A minimal Gehan-loss sketch follows this entry.)
arXiv Detail & Related papers (2022-06-13T08:38:18Z)
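
As a companion to the AFT entry above, here is a minimal NumPy sketch of a Gehan-type rank loss on AFT residuals, using the standard pairwise form; the paper's pair sub-sampling and network architecture are omitted, and this may differ in detail from the authors' exact variant.

```python
import numpy as np

def gehan_loss(log_time, pred, event):
    """Gehan-type rank loss for an AFT model.
    Residuals e_i = log(T_i) - f(x_i); the loss averages, over all pairs,
    delta_i * max(0, e_j - e_i), penalizing observed failures whose residual
    is smaller than another subject's."""
    e = log_time - pred
    diff = e[None, :] - e[:, None]        # diff[i, j] = e_j - e_i
    return np.mean(event[:, None] * np.maximum(diff, 0.0))

# Toy usage: three subjects, the last one right-censored (event = 0).
loss = gehan_loss(np.log([2.0, 5.0, 7.0]),
                  np.array([0.5, 1.2, 1.9]),
                  np.array([1.0, 1.0, 0.0]))
```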
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions via an alternating direction method of multipliers (ADMM) based data-imputation method. (A toy truncated Schatten p-norm computation follows this entry.)
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
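
For the entry above, a short sketch of the quantity in its title, using the common definition of the truncated Schatten p-norm (drop the t largest singular values, sum the p-th powers of the rest); the t and p values here are illustrative, and in the imputation model this term is minimized over the recovered tensor's unfoldings.

```python
import numpy as np

def truncated_schatten_p(M, t=1, p=0.5):
    """Truncated Schatten p-norm, raised to the p-th power: sum of sigma_i^p
    over all but the t largest singular values. Skipping the dominant values
    lets minimization shrink noise while preserving the main structure."""
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return float(np.sum(s[t:] ** p))

# Toy usage on a low-rank-plus-noise matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
norm_val = truncated_schatten_p(M + 0.01 * rng.standard_normal((20, 15)))
```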
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors from regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- ES-dRNN: A Hybrid Exponential Smoothing and Dilated Recurrent Neural Network Model for Short-Term Load Forecasting [1.4502611532302039]
Short-term load forecasting (STLF) is challenging due to the complexity of the underlying time series (TS), which exhibit multiple seasonalities.
This paper proposes a novel hybrid hierarchical deep learning model that deals with multiple seasonality.
It combines exponential smoothing (ES) and a recurrent neural network (RNN). (A minimal ES-preprocessing sketch follows this entry.)
arXiv Detail & Related papers (2021-12-05T19:38:42Z)
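
To give a flavor of the ES half of such a hybrid, here is a crude sketch: simple exponential smoothing extracts a local level that normalizes the series before it is handed to the RNN. The smoothing constant and the level-only ES are illustrative simplifications; ES-dRNN itself also handles seasonal components and learns the smoothing jointly with the network.

```python
import numpy as np

def es_level_normalize(series, alpha=0.3):
    """Simple exponential smoothing: level[t] = alpha*y[t] + (1-alpha)*level[t-1].
    Returns the level-normalized series (the RNN input in a hybrid ES/RNN
    scheme) together with the level used to de-normalize forecasts later."""
    y = np.asarray(series, dtype=float)
    level = np.empty_like(y)
    level[0] = y[0]
    for t in range(1, len(y)):
        level[t] = alpha * y[t] + (1 - alpha) * level[t - 1]
    return y / level, level

# Toy usage on an upward-trending load series.
normalized, level = es_level_normalize([100, 104, 103, 110, 118, 121])
```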
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
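
For the CfC entry above, a heavily simplified sketch of the closed-form gating idea: instead of integrating an ODE numerically, the state at time t is an interpolation of two learned candidates through a time-dependent sigmoid gate. The single-layer tanh heads and the exact gate form here are my assumptions and only loosely follow the published CfC formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, u, t, Wf, Wg, Wh):
    """One closed-form continuous-depth style update (toy, single layer).
    A time-aware gate sigma(-f*t) blends candidates g and h, replacing a
    numerical ODE solver step with one closed-form expression."""
    z = np.concatenate([x, u])
    f = np.tanh(Wf @ z)            # gate pre-activation
    g = np.tanh(Wg @ z)            # candidate state A
    h = np.tanh(Wh @ z)            # candidate state B
    gate = sigmoid(-f * t)         # shifts toward h as elapsed time t grows
    return gate * g + (1.0 - gate) * h

# Toy usage: 4-dim state, 2-dim input, irregular time gap t = 0.7.
rng = np.random.default_rng(0)
Wf, Wg, Wh = (rng.standard_normal((4, 6)) for _ in range(3))
x_next = cfc_step(np.zeros(4), np.array([0.5, -0.1]), 0.7, Wf, Wg, Wh)
```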
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
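
To illustrate what tensorizing the weights buys for the entry above, here is a toy sketch of rebuilding a dense weight matrix from tensor-train (TT) cores; storing only the cores costs far less memory than the full matrix. The core shapes and ranks are illustrative, not the paper's configuration.

```python
import numpy as np

def tt_to_matrix(cores):
    """Rebuild a full matrix from TT-matrix cores.
    cores[k] has shape (r_k, m_k, n_k, r_{k+1}) with boundary ranks r_0 = r_N = 1;
    the dense result has shape (prod m_k, prod n_k)."""
    W = cores[0]
    for core in cores[1:]:
        W = np.tensordot(W, core, axes=([-1], [0]))   # chain the TT ranks
    W = W.squeeze(axis=(0, -1))                        # drop boundary ranks
    k = W.ndim // 2                                    # number of cores
    # group the row (m) axes, then the column (n) axes, and flatten each group
    W = W.transpose(list(range(0, W.ndim, 2)) + list(range(1, W.ndim, 2)))
    return W.reshape(int(np.prod(W.shape[:k])), -1)

# Toy usage: two cores encode a 4x9 matrix through a TT rank of 4.
rng = np.random.default_rng(0)
W = tt_to_matrix([rng.standard_normal((1, 2, 3, 4)),
                  rng.standard_normal((4, 2, 3, 1))])
assert W.shape == (4, 9)
```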
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)