Large-scale Dynamic Network Representation via Tensor Ring Decomposition
- URL: http://arxiv.org/abs/2304.08798v1
- Date: Tue, 18 Apr 2023 08:02:48 GMT
- Title: Large-scale Dynamic Network Representation via Tensor Ring Decomposition
- Authors: Qu Wang
- Abstract summary: Large-scale Dynamic Networks (LDNs) are becoming increasingly important in the Internet age.
This work proposes a model based on Tensor Ring (TR) decomposition for efficient representation learning for an LDN.
Experimental studies on two real LDNs demonstrate that the proposed method achieves higher accuracy than existing models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large-scale Dynamic Networks (LDNs) are becoming increasingly important in
the Internet age. The dynamic nature of these networks captures the evolution of the
network structure and how edge weights change over time, posing unique challenges for
data analysis and modeling. A Latent Factorization of Tensors (LFT) model facilitates
efficient representation learning for an LDN, but existing LFT models are mostly based
on Canonical Polyadic Factorization (CPF). Therefore, this work proposes a model based
on Tensor Ring (TR) decomposition for efficient representation learning for an LDN.
Specifically, we incorporate the principle of single latent factor-dependent,
non-negative, and multiplicative update (SLF-NMU) into the TR decomposition model, and
analyze the particular bias form of TR decomposition. Experimental studies on two real
LDNs demonstrate that the proposed method achieves higher accuracy than existing models.
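The abstract describes the mechanism only at a high level, so the following is a minimal sketch of the idea: a third-order (node x node x time) network tensor is approximated by non-negative Tensor Ring cores that are refined with an NMF-style multiplicative update computed on observed entries only. The sizes, ranks, helper names, and the exact update rule are illustrative assumptions, not the paper's SLF-NMU algorithm or its bias scheme.

```python
# A minimal sketch, NOT the paper's exact SLF-NMU algorithm or bias scheme:
# a (node x node x time) dynamic-network tensor is approximated by non-negative
# Tensor Ring (TR) cores, refined with an NMF-style multiplicative update that
# touches only the observed entries. All sizes, ranks, and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, T = 20, 8                 # hypothetical node count and number of time slots
r1, r2, r3 = 4, 4, 4         # hypothetical TR ranks (the ring closes: r3 -> r1)

# TR cores: entry (i, j, k) is trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :]).
G1 = rng.random((r3, N, r1))
G2 = rng.random((r1, N, r2))
G3 = rng.random((r2, T, r3))

def tr_entry(i, j, k):
    """Reconstruct a single tensor entry from the TR cores."""
    return np.trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])

# Toy observations: sparse (source, target, time, weight) records of an LDN.
obs = [(rng.integers(N), rng.integers(N), rng.integers(T), rng.random())
       for _ in range(500)]

eps = 1e-12
for epoch in range(50):
    # Split the squared-loss gradient w.r.t. G1 into a data part (num) and a
    # model part (den); G2 and G3 would be updated analogously in a full run.
    num = np.zeros_like(G1)
    den = np.zeros_like(G1)
    for i, j, k, y in obs:
        M = G2[:, j, :] @ G3[:, k, :]        # shape (r1, r3)
        y_hat = np.trace(G1[:, i, :] @ M)
        num[:, i, :] += y * M.T              # d(y_hat)/dG1[:, i, :] = M.T
        den[:, i, :] += y_hat * M.T
    G1 *= num / (den + eps)                  # multiplicative update keeps G1 >= 0

rmse = np.sqrt(np.mean([(tr_entry(i, j, k) - y) ** 2 for i, j, k, y in obs]))
print(f"training RMSE on observed entries: {rmse:.4f}")
```

Because both the numerator and the denominator are non-negative, the element-wise ratio keeps the cores non-negative without an explicit projection step, which is the property the multiplicative-update principle relies on.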
Related papers
- Efficient Frequency Selective Surface Analysis via End-to-End Model-Based Learning [2.66269503676104]
This paper introduces an innovative end-to-end model-based deep learning approach for efficient electromagnetic analysis of high-dimensional frequency selective surfaces (FSS).
Unlike traditional data-driven methods that require large datasets, this approach combines physical insights from equivalent circuit models with deep learning techniques to significantly reduce model complexity and enhance prediction accuracy.
arXiv Detail & Related papers (2024-10-22T07:27:20Z)
- Fuzzy Recurrent Stochastic Configuration Networks for Industrial Data Analytics [3.8719670789415925]
This paper presents a novel neuro-fuzzy model, termed fuzzy recurrent stochastic configuration networks (F-RSCNs), for industrial data analytics.
The proposed F-RSCN is constructed by multiple sub-reservoirs, and each sub-reservoir is associated with a Takagi-Sugeno-Kang (TSK) fuzzy rule.
By integrating TSK fuzzy inference systems into RSCNs, F-RSCNs have strong fuzzy inference capability and can achieve sound performance for both learning and generalization.
arXiv Detail & Related papers (2024-07-06T01:40:31Z)
- Recurrent neural networks and transfer learning for elasto-plasticity in woven composites [0.0]
This article presents Recurrent Neural Network (RNN) models as a surrogate for computationally intensive meso-scale simulation of woven composites.
A mean-field model generates a comprehensive data set representing elasto-plastic behavior.
In simulations, arbitrary six-dimensional strain histories are used to predict stresses, with random-walk loading as the source task and cyclic loading conditions as the target task.
arXiv Detail & Related papers (2023-11-22T14:47:54Z)
- A Momentum-Incorporated Non-Negative Latent Factorization of Tensors Model for Dynamic Network Representation [0.0]
A large-scale dynamic network (LDN) is a source of data in many big data-related applications.
A Latent factorization of tensors (LFT) model efficiently extracts this time pattern.
LFT models based on stochastic gradient descent (SGD) solvers are often limited by training schemes and have poor tail convergence.
This paper proposes a novel nonlinear LFT model (MNNL) based on momentum-incorporated SGD to make training unconstrained and compatible with general training schemes; a sketch of such a momentum-incorporated update appears after this list.
arXiv Detail & Related papers (2023-05-04T12:30:53Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster than their ODE-solver-based counterparts.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L1$ and $L2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
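For the momentum-incorporated LFT entry above (the MNNL paper), here is a hedged sketch of what such an update can look like. It assumes a rank-R CP-style factorization of a (node x node x time) tensor, classical momentum applied to per-entry SGD gradients, and a simple clipping step for non-negativity; the factor names, sizes, learning-rate settings, and the clipping trick are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch, NOT the MNNL paper's exact model: a rank-R CP-style latent
# factorization of a (node x node x time) tensor trained on observed entries with
# SGD plus classical momentum, and a clipping step that keeps factors non-negative.
# Factor names, sizes, learning rate, and the clipping trick are all assumptions.
import numpy as np

rng = np.random.default_rng(1)

N, T, R = 20, 8, 4                     # hypothetical nodes, time slots, rank
U = rng.random((N, R))                 # source-node latent factors
V = rng.random((N, R))                 # target-node latent factors
W = rng.random((T, R))                 # time latent factors
factors = {"U": U, "V": V, "W": W}
vel = {name: np.zeros_like(M) for name, M in factors.items()}   # momentum buffers

obs = [(rng.integers(N), rng.integers(N), rng.integers(T), rng.random())
       for _ in range(500)]

lr, beta = 0.01, 0.9                   # learning rate and momentum coefficient
for epoch in range(100):
    for i, j, k, y in obs:
        y_hat = np.sum(U[i] * V[j] * W[k])     # CP reconstruction of one entry
        err = y_hat - y
        grads = {"U": (i, err * V[j] * W[k]),  # d(0.5*err^2)/dU[i]
                 "V": (j, err * U[i] * W[k]),
                 "W": (k, err * U[i] * V[j])}
        for name, (idx, g) in grads.items():
            vel[name][idx] = beta * vel[name][idx] - lr * g           # momentum step
            factors[name][idx] = np.maximum(factors[name][idx] + vel[name][idx], 0.0)

rmse = np.sqrt(np.mean([(np.sum(U[i] * V[j] * W[k]) - y) ** 2 for i, j, k, y in obs]))
print(f"training RMSE on observed entries: {rmse:.4f}")
```

The velocity buffer smooths the per-entry gradients across epochs, which is the mechanism the MNNL summary credits with improving tail convergence over plain SGD; the clipping step is only one simple way to preserve non-negativity.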