DGEM: A New Dual-modal Graph Embedding Method in Recommendation System
- URL: http://arxiv.org/abs/2108.04031v1
- Date: Mon, 9 Aug 2021 13:31:56 GMT
- Title: DGEM: A New Dual-modal Graph Embedding Method in Recommendation System
- Authors: Huimin Zhou and Qing Li and Yong Jiang and Rongwei Yang and Zhuyun Qi
- Abstract summary: In current deep learning based recommendation systems, an embedding method is generally employed to convert high-dimensional sparse feature vectors into low-dimensional dense feature vectors.
We propose the Dual-modal Graph Embedding Method (DGEM) to solve these problems.
- Score: 18.33515434926957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In current deep learning based recommendation systems, an embedding
method is generally employed to convert high-dimensional sparse feature vectors
into low-dimensional dense feature vectors. However, because the input
dimension of the embedding layer is very large, adding the embedding layer
significantly slows down the convergence of the entire neural network, which is
not acceptable in real-world scenarios. In addition, as interactions between
users and items increase and the relationships among items become more
complicated, embedding methods designed for sequence data are no longer
suitable for the graph-structured data found in real-world environments.
Therefore, in this paper, we propose the
Dual-modal Graph Embedding Method (DGEM) to solve these problems. DGEM includes
two modes, static and dynamic. We first construct the item graph to extract the
graph structure and use random walks with unequal (biased) transition
probabilities to capture the high-order proximity between items. Then we
generate the graph embedding vectors through the Skip-Gram model and finally
feed them into the downstream deep neural network for the recommendation task.
The experimental results show that DGEM can mine the high-order proximity
between items and enhance the expressive ability of the recommendation model.
Meanwhile, it also improves recommendation performance by utilizing the
time-dependent relationships between items.
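As a concrete illustration of the static mode described above, here is a minimal sketch, assuming hypothetical helper names (build_item_graph, biased_random_walk) and gensim's Word2Vec for the Skip-Gram step; the paper's actual graph construction, walk biasing, and hyperparameters may differ.

```python
import random
from collections import defaultdict

from gensim.models import Word2Vec  # Skip-Gram implementation (gensim >= 4)


def build_item_graph(sessions):
    """Build a weighted, directed item graph from interaction sequences:
    edge weight = number of times item b directly follows item a."""
    graph = defaultdict(lambda: defaultdict(float))
    for items in sessions:
        for a, b in zip(items, items[1:]):
            graph[a][b] += 1.0
    return graph


def biased_random_walk(graph, start, length, rng):
    """Unequal-probability walk: the next item is drawn proportionally
    to the outgoing edge weights of the current item."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph.get(walk[-1], {})
        if not neighbors:
            break
        items, weights = zip(*neighbors.items())
        walk.append(rng.choices(items, weights=weights, k=1)[0])
    return walk


def generate_walks(graph, walks_per_node=10, length=20, seed=0):
    rng = random.Random(seed)
    return [
        biased_random_walk(graph, node, length, rng)
        for node in graph
        for _ in range(walks_per_node)
    ]


if __name__ == "__main__":
    # Toy interaction sessions; item IDs are placeholders.
    sessions = [["i1", "i2", "i3"], ["i2", "i3", "i4"], ["i1", "i3", "i4"]]
    graph = build_item_graph(sessions)
    walks = generate_walks(graph)
    # Skip-Gram over the walks yields dense item embeddings that a
    # downstream recommendation network can consume.
    model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1)
    print(model.wv["i3"].shape)
```

The resulting item vectors would then be fed to the downstream recommendation network; the dynamic mode, which exploits the time-dependent relationships between items, is not shown here.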
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z) - Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC).
arXiv Detail & Related papers (2024-06-27T08:26:20Z) - Graph Transformer GANs with Graph Masked Modeling for Architectural
Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel deep graph model with a non-gradient decision layer for graph mining.
The proposed model achieves state-of-the-art performance compared with current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be retrained regularly to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Learnable Hypergraph Laplacian for Hypergraph Learning [34.28748027233654]
HyperGraph Convolutional Neural Networks (HGCNNs) have demonstrated their potential in modeling high-order relations preserved in graph structured data.
We propose the first learning-based method tailored for constructing an adaptive hypergraph structure, termed HypERgrAph Laplacian aDaptor (HERALD).
HERALD adaptively optimizes the adjacency relationship between hypernodes and hyperedges in an end-to-end manner, so that a task-aware hypergraph is learned.
arXiv Detail & Related papers (2021-06-12T02:07:07Z) - Pyramidal Reservoir Graph Neural Network [18.632681846787246]
We propose a deep Graph Neural Network (GNN) model that alternates two types of layers.
We show how graph pooling can reduce the computational complexity of the model.
Our proposed approach to the design of RC-based GNNs offers an advantageous and principled trade-off between accuracy and complexity.
arXiv Detail & Related papers (2021-04-10T08:34:09Z) - RGCF: Refined Graph Convolution Collaborative Filtering with concise and
expressive embedding [42.46797662323393]
We develop a new GCN-based Collaborative Filtering model, named Refined Graph Convolution Collaborative Filtering (RGCF).
RGCF is more capable of capturing the implicit high-order connectivities inside the graph, and the resulting vector representations are more expressive.
We conduct extensive experiments on three public million-size datasets, demonstrating that our RGCF significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2020-07-07T12:26:10Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) models for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theory behind simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.