Cluster-based Graph Collaborative Filtering
- URL: http://arxiv.org/abs/2404.10321v2
- Date: Fri, 08 Nov 2024 13:29:47 GMT
- Title: Cluster-based Graph Collaborative Filtering
- Authors: Fan Liu, Shuai Zhao, Zhiyong Cheng, Liqiang Nie, Mohan Kankanhalli
- Abstract summary: Graph Convolution Networks (GCNs) have succeeded in learning user and item representations for recommendation systems.
Most existing GCN-based methods overlook the multiple interests of users while performing high-order graph convolution.
We propose a novel GCN-based recommendation model, termed Cluster-based Graph Collaborative Filtering (ClusterGCF).
- Score: 55.929052969825825
- License:
- Abstract: Graph Convolution Networks (GCNs) have significantly succeeded in learning user and item representations for recommendation systems. The core of their efficacy is the ability to explicitly exploit the collaborative signals from both the first- and high-order neighboring nodes. However, most existing GCN-based methods overlook the multiple interests of users while performing high-order graph convolution. Thus, the noisy information from unreliable neighbor nodes (e.g., users with dissimilar interests) negatively impacts the representation learning of the target node. Additionally, conducting graph convolution operations without differentiating high-order neighbors suffers the over-smoothing issue when stacking more layers, resulting in performance degradation. In this paper, we aim to capture more valuable information from high-order neighboring nodes while avoiding noise for better representation learning of the target node. To achieve this goal, we propose a novel GCN-based recommendation model, termed Cluster-based Graph Collaborative Filtering (ClusterGCF). This model performs high-order graph convolution on cluster-specific graphs, which are constructed by capturing the multiple interests of users and identifying the common interests among them. Specifically, we design an unsupervised and optimizable soft node clustering approach to classify user and item nodes into multiple clusters. Based on the soft node clustering results and the topology of the user-item interaction graph, we assign the nodes with probabilities for different clusters to construct the cluster-specific graphs. To evaluate the effectiveness of ClusterGCF, we conducted extensive experiments on four publicly available datasets. Experimental results demonstrate that our model can significantly improve recommendation performance.
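Below is a minimal PyTorch sketch of the two ideas described in the abstract: an optimizable soft clustering of user and item nodes, and cluster-specific graphs on which high-order graph convolution is then performed. The class name, cluster count, temperature, cosine-similarity assignment, and the outer-product edge weighting are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Sketch only: soft node clustering + cluster-specific graph convolution,
# under the assumptions stated in the lead-in (not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftClusterGraphConv(nn.Module):
    def __init__(self, num_nodes: int, embed_dim: int = 64,
                 num_clusters: int = 4, num_layers: int = 2,
                 temperature: float = 0.5):
        super().__init__()
        self.embedding = nn.Embedding(num_nodes, embed_dim)   # user + item embeddings
        self.cluster_centers = nn.Parameter(torch.randn(num_clusters, embed_dim))
        self.num_layers = num_layers
        self.temperature = temperature
        nn.init.xavier_uniform_(self.embedding.weight)

    def soft_assignments(self) -> torch.Tensor:
        """Assign every node to every cluster with a probability (soft clustering)."""
        sim = F.normalize(self.embedding.weight, dim=1) @ F.normalize(
            self.cluster_centers, dim=1).t()                  # (N, K) cosine similarity
        return F.softmax(sim / self.temperature, dim=1)       # each row sums to 1

    def forward(self, norm_adj: torch.Tensor) -> torch.Tensor:
        """norm_adj: (N, N) symmetrically normalized user-item adjacency
        (dense here for brevity; a sparse tensor would be used in practice)."""
        probs = self.soft_assignments()                       # (N, K)
        ego = self.embedding.weight
        out = ego
        for k in range(probs.shape[1]):
            # Cluster-specific graph: edge (i, j) is retained in proportion to
            # how strongly both endpoints belong to cluster k.
            p_k = probs[:, k]
            cluster_adj = norm_adj * torch.outer(p_k, p_k)
            h = ego
            layer_sum = torch.zeros_like(ego)
            for _ in range(self.num_layers):
                h = cluster_adj @ h                            # high-order propagation
                layer_sum = layer_sum + h
            out = out + layer_sum / self.num_layers
        return out                                             # final node representations


# Toy usage: 12 nodes (users + items) with random interactions.
if __name__ == "__main__":
    n = 12
    adj = (torch.rand(n, n) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()
    deg = adj.sum(dim=1).clamp(min=1)
    norm_adj = adj / torch.sqrt(torch.outer(deg, deg))         # D^-1/2 A D^-1/2
    model = SoftClusterGraphConv(num_nodes=n)
    print(model(norm_adj).shape)                               # torch.Size([12, 64])
```

Because the cluster assignments are differentiable, they can be trained jointly with the embeddings under a recommendation loss, which is what makes the clustering "optimizable" in the sense the abstract describes.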
Related papers
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$2$FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z)
- Reinforcement Graph Clustering with Unknown Cluster Number [91.4861135742095]
We propose a new deep graph clustering method termed Reinforcement Graph Clustering.
In our proposed method, cluster number determination and unsupervised representation learning are unified in a single framework.
To provide feedback actions, a clustering-oriented reward function is proposed to enhance cohesion within the same cluster and separation between different clusters.
arXiv Detail & Related papers (2023-08-13T18:12:28Z)
- Dink-Net: Neural Clustering on Large Graphs [59.10189693120368]
A deep graph clustering method (Dink-Net) is proposed with the idea of dilation and shrink.
By discriminating whether nodes have been corrupted by augmentations, representations are learned in a self-supervised manner.
The clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss.
Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset, which has 111 million nodes and 1.6 billion edges.
arXiv Detail & Related papers (2023-05-28T15:33:24Z)
- Higher-order Clustering and Pooling for Graph Neural Networks [77.47617360812023]
Graph Neural Networks achieve state-of-the-art performance on a plethora of graph classification tasks.
HoscPool is a clustering-based graph pooling operator that captures higher-order information hierarchically.
We evaluate HoscPool on graph classification tasks and its clustering component on graphs with ground-truth community structure.
arXiv Detail & Related papers (2022-09-02T09:17:10Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Self-supervised Contrastive Attributed Graph Clustering [110.52694943592974]
We propose a novel attributed graph clustering network, namely Self-supervised Contrastive Attributed Graph Clustering (SCAGC).
In SCAGC, a self-supervised contrastive loss that leverages inaccurate clustering labels is designed for node representation learning.
For out-of-sample (OOS) nodes, SCAGC can directly calculate their clustering labels.
arXiv Detail & Related papers (2021-10-15T03:25:28Z)
- Seeing All From a Few: Nodes Selection Using Graph Pooling for Graph Clustering [37.68977275752782]
Noisy edges and nodes in the graph may degrade the clustering results.
We propose a novel dual graph embedding network (DGEN) to improve the robustness of graph clustering to noisy nodes and edges.
Experiments on three benchmark graph datasets demonstrate its superiority over several state-of-the-art algorithms.
arXiv Detail & Related papers (2021-04-30T06:51:51Z)
- Graph InfoClust: Leveraging cluster-level node information for unsupervised graph representation learning [12.592903558338444]
We propose a graph representation learning method called Graph InfoClust.
It seeks to additionally capture cluster-level information content.
This optimization leads the node representations to capture richer information and nodal interactions, which improves their quality.
arXiv Detail & Related papers (2020-09-15T09:33:20Z)
- CAGNN: Cluster-Aware Graph Neural Networks for Unsupervised Graph Representation Learning [19.432449825536423]
Unsupervised graph representation learning aims to learn low-dimensional node embeddings without supervision.
We present a novel cluster-aware graph neural network (CAGNN) model for unsupervised graph representation learning using self-supervised techniques.
arXiv Detail & Related papers (2020-09-03T13:57:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.