Quaternion-Based Graph Convolution Network for Recommendation
- URL: http://arxiv.org/abs/2111.10536v1
- Date: Sat, 20 Nov 2021 07:42:18 GMT
- Title: Quaternion-Based Graph Convolution Network for Recommendation
- Authors: Yaxing Fang, Pengpeng Zhao, Guanfeng Liu, Yanchi Liu, Victor S. Sheng,
Lei Zhao, Xiaofang Zhou
- Abstract summary: Graph Convolution Network (GCN) has been widely applied in recommender systems.
GCN is vulnerable to noisy and incomplete graphs, which are common in the real world.
We propose a Quaternion-based Graph Convolution Network (QGCN) recommendation model.
- Score: 45.005089037955536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolution Network (GCN) has been widely applied in recommender
systems for its representation learning capability on user and item embeddings.
However, GCN is vulnerable to noisy and incomplete graphs, which are common in
the real world, due to its recursive message propagation mechanism. In the
literature, some works propose removing the feature transformation during
message propagation, but this makes them unable to effectively capture the graph
structural features. Moreover, they model users and items in the Euclidean
space, which has been demonstrated to have high distortion when modeling
complex graphs, further degrading the capability to capture the graph
structural features and leading to sub-optimal performance. To this end, in
this paper, we propose a simple yet effective Quaternion-based Graph
Convolution Network (QGCN) recommendation model. In the proposed model, we
utilize the hyper-complex Quaternion space to learn user and item
representations and feature transformation to improve both performance and
robustness. Specifically, we first embed all users and items into the
Quaternion space. Then, we introduce the quaternion embedding propagation
layers with quaternion feature transformation to perform message propagation.
Finally, we combine the embeddings generated at each layer with the mean
pooling strategy to obtain the final embeddings for recommendation. Extensive
experiments on three public benchmark datasets demonstrate that our proposed
QGCN model outperforms baseline methods by a large margin.
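The pipeline described in the abstract (quaternion embeddings, quaternion feature transformation during message propagation, and mean pooling over the layer outputs) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact formulation: the toy graph, the single shared quaternion weight `w`, the simple row normalization, and the per-node single-quaternion embeddings are all illustrative assumptions.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions stored as (..., 4) arrays [a, b, c, d],
    representing a + b*i + c*j + d*k. This is the standard non-commutative
    quaternion multiplication used for quaternion feature transformation."""
    a1, b1, c1, d1 = p[..., 0], p[..., 1], p[..., 2], p[..., 3]
    a2, b2, c2, d2 = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i part
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j part
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k part
    ], axis=-1)

rng = np.random.default_rng(0)
n = 5                                       # toy node count (users + items)
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)                # symmetric interaction graph
adj_norm = adj / (adj.sum(1, keepdims=True) + 1e-8)  # simple row normalization

emb = rng.normal(size=(n, 4))               # layer-0 quaternion embedding per node
w = rng.normal(size=(4,))                   # shared quaternion weight (illustrative)

# Quaternion embedding propagation: transform with the Hamilton product,
# then aggregate over the normalized adjacency; keep each layer's output.
layers = [emb]
for _ in range(2):
    emb = adj_norm @ hamilton_product(emb, w)
    layers.append(emb)

final = np.mean(layers, axis=0)             # mean pooling over layer embeddings
print(final.shape)                          # (5, 4)
```

In a real model each node would carry many quaternion-valued dimensions and `w` would be a learned quaternion weight matrix; the sketch keeps one quaternion per node only to make the Hamilton-product transformation and the mean-pooling readout easy to follow.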
Related papers
- DiRW: Path-Aware Digraph Learning for Heterophily [23.498557237805414]
Graph neural network (GNN) has emerged as a powerful representation learning tool for graph-structured data.
We propose Directed Random Walk (DiRW), which can be viewed as a plug-and-play strategy or an innovative neural architecture.
DiRW incorporates a direction-aware path sampler optimized from perspectives of walk probability, length, and number.
arXiv Detail & Related papers (2024-10-14T09:26:56Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC).
arXiv Detail & Related papers (2024-06-27T08:26:20Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
Graph Convolutional Network (GCN) decouples neighborhood aggregation and feature transformation in each convolutional layer.
In this paper, we propose a new paradigm of GCN, termed Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of decoupled GCNs for aggregating neighborhood information while developing much more powerful feature learning modules.
arXiv Detail & Related papers (2022-11-15T02:02:51Z)
- Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which could generally augment the existing GNN backbones to stabilize the model training and improve the model's generalization performance.
arXiv Detail & Related papers (2021-09-23T12:39:01Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Locality Preserving Dense Graph Convolutional Networks with Graph Context-Aware Node Representations [19.623379678611744]
Graph convolutional networks (GCNs) have been widely used for representation learning on graph data.
In many graph classification applications, GCN-based approaches have outperformed traditional methods.
We propose a locality-preserving dense GCN with graph context-aware node representations.
arXiv Detail & Related papers (2020-10-12T02:12:27Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.