Dink-Net: Neural Clustering on Large Graphs
- URL: http://arxiv.org/abs/2305.18405v3
- Date: Fri, 14 Jul 2023 16:00:24 GMT
- Title: Dink-Net: Neural Clustering on Large Graphs
- Authors: Yue Liu, Ke Liang, Jun Xia, Sihang Zhou, Xihong Yang, Xinwang Liu,
Stan Z. Li
- Abstract summary: A deep graph clustering method (Dink-Net) is proposed with the idea of dilation and shrink.
By discriminating whether nodes have been corrupted by augmentations, representations are learned in a self-supervised manner.
The clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss.
Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6 billion edges.
- Score: 59.10189693120368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep graph clustering, which aims to group the nodes of a graph into disjoint
clusters with deep neural networks, has achieved promising progress in recent
years. However, existing methods fail to scale to large graphs with millions
of nodes. To solve this problem, a scalable deep graph clustering method
(Dink-Net) is proposed with the idea of dilation and shrink. First, by
discriminating whether nodes have been corrupted by augmentations,
representations are learned in a self-supervised manner. Meanwhile, the
cluster centres are initialized as learnable neural parameters. Subsequently,
the clustering distribution is optimized by minimizing the proposed cluster
dilation loss and cluster shrink loss in an adversarial manner. With these
designs, we unify the two steps of clustering, i.e., representation learning
and clustering optimization, into an end-to-end framework, guiding the
network to learn clustering-friendly features. Moreover, Dink-Net scales well
to large graphs because the designed loss functions optimize the clustering
distribution over mini-batch data without a drop in performance. Both
experimental results and theoretical analyses demonstrate the superiority of
our method. Compared to the runner-up, Dink-Net achieves a 9.62% NMI
improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6
billion edges. The source code is released at
https://github.com/yueliu1999/Dink-Net, and a collection (papers, codes, and
datasets) of deep graph clustering resources is shared at
https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering.
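The two clustering losses are the core of the method: dilation pushes the
learnable cluster centres apart while shrink pulls embeddings toward the
centres, and both operate on mini-batches. The sketch below is a hedged
PyTorch reading of that idea; the function names, the normalization, and the
nearest-centre shrink rule are assumptions rather than the authors' exact
formulation (see the released code at https://github.com/yueliu1999/Dink-Net
for the real losses).

    import torch
    import torch.nn.functional as F

    def cluster_dilation_loss(centers: torch.Tensor) -> torch.Tensor:
        # centers: (k, d) learnable cluster-centre parameters.
        c = F.normalize(centers, dim=1)
        k = c.size(0)
        pairwise = torch.cdist(c, c)                  # (k, k) distances
        off_diag = pairwise[~torch.eye(k, dtype=torch.bool)]
        return -off_diag.mean()                       # maximise separation

    def cluster_shrink_loss(z: torch.Tensor,
                            centers: torch.Tensor) -> torch.Tensor:
        # z: (b, d) mini-batch embeddings from the self-supervised encoder.
        z = F.normalize(z, dim=1)
        c = F.normalize(centers, dim=1)
        dist = torch.cdist(z, c)                      # (b, k)
        return dist.min(dim=1).values.mean()          # pull to nearest centre

    # Hypothetical training step: the two losses act in opposite directions
    # (dilation separates centres, shrink contracts points onto them), which
    # is the adversarial interplay the abstract describes.
    k, d, b = 16, 64, 256
    centers = torch.nn.Parameter(torch.randn(k, d))
    opt = torch.optim.Adam([centers], lr=1e-3)
    z = torch.randn(b, d)                             # stand-in encoder output
    loss = cluster_shrink_loss(z, centers) + cluster_dilation_loss(centers)
    opt.zero_grad()
    loss.backward()
    opt.step()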
Related papers
- Cluster-based Graph Collaborative Filtering [55.929052969825825]
Graph Convolution Networks (GCNs) have succeeded in learning user and item representations for recommendation systems.
Most existing GCN-based methods overlook the multiple interests of users while performing high-order graph convolution.
We propose a novel GCN-based recommendation model, termed Cluster-based Graph Collaborative Filtering (ClusterGCF).
arXiv Detail & Related papers (2024-04-16T07:05:16Z)
- Learning Uniform Clusters on Hypersphere for Deep Graph-level Clustering [25.350054742471816]
We propose a novel deep graph-level clustering method called Uniform Deep Graph Clustering (UDGC).
UDGC assigns instances evenly to different clusters and then scatters those clusters over the unit hypersphere, yielding a more uniform cluster-level distribution and milder cluster collapse (a hedged sketch of such a uniformity objective follows this entry).
Our empirical study on eight well-known datasets demonstrates that UDGC significantly outperforms state-of-the-art models.
arXiv Detail & Related papers (2023-11-23T12:08:20Z)
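Scattering cluster centres uniformly over the unit hypersphere is a known
trick from contrastive representation learning. The snippet below shows one
generic form of such an objective (the Gaussian-potential uniformity loss of
Wang & Isola, 2020); it is offered only as a plausible reading of the UDGC
summary, not as UDGC's actual loss.

    import torch
    import torch.nn.functional as F

    def hypersphere_uniformity(centers: torch.Tensor,
                               t: float = 2.0) -> torch.Tensor:
        # Lower values mean centres are spread more evenly on the sphere.
        c = F.normalize(centers, dim=1)                 # project onto sphere
        sq_dist = torch.cdist(c, c).pow(2)              # (k, k) squared dists
        mask = ~torch.eye(c.size(0), dtype=torch.bool)  # drop self-pairs
        return sq_dist[mask].mul(-t).exp().mean().log() # Gaussian potential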
- Reinforcement Graph Clustering with Unknown Cluster Number [91.4861135742095]
We propose a new deep graph clustering method termed Reinforcement Graph Clustering (RGC).
In the proposed method, cluster number determination and unsupervised representation learning are unified into a single framework.
To provide feedback for the actions, a clustering-oriented reward function is proposed that encourages cohesion within clusters and separation between clusters (a hedged sketch follows this entry).
arXiv Detail & Related papers (2023-08-13T18:12:28Z)
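A reward of this kind can be built from embedding similarities alone. The
sketch below scores a hard clustering by mean intra-cluster similarity minus
mean inter-cluster similarity; the summary does not give the paper's exact
reward, so treat this form and the function name as illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def clustering_reward(z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # z: (n, d) node embeddings; labels: (n,) hard cluster ids
        # (assumes at least two non-singleton clusters are present).
        z = F.normalize(z, dim=1)
        sim = z @ z.T                                  # cosine similarities
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool)
        intra = sim[same & ~eye].mean()                # cohesion within
        inter = sim[~same].mean()                      # similarity across
        return intra - inter                           # higher is better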
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance clustering performance.
The proposed method consistently outperforms several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- Total Variation Graph Neural Networks [5.571369922847262]
Recently proposed graph neural networks (GNNs) for clustering are trained with an unsupervised minimum-cut objective.
We propose a GNN model that computes cluster assignments by optimizing a tighter relaxation of the minimum cut (a hedged sketch of one such relaxation follows this entry).
arXiv Detail & Related papers (2022-11-11T14:13:14Z)
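One way to tighten the usual quadratic (spectral) relaxation of the minimum
cut is to penalise the graph total variation of the soft assignment matrix,
an L1 rather than L2 quantity. The snippet below computes that term for a COO
edge list; it illustrates the general idea only, and the paper's full
objective is richer (it also needs a term preventing degenerate, unbalanced
assignments).

    import torch

    def graph_total_variation(S: torch.Tensor,
                              edge_index: torch.Tensor) -> torch.Tensor:
        # S: (n, k) soft cluster assignments; edge_index: (2, m) undirected
        # edges listed once. L1 differences across edges track the cut sharply.
        src, dst = edge_index
        return (S[src] - S[dst]).abs().sum()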
- Scalable Deep Graph Clustering with Random-walk based Self-supervised Learning [0.0]
We show that our scalable deep clustering algorithm, RwSL, can continue to scale beyond graphs with more than one million nodes.
We also demonstrate how RwSL could perform node clustering on a graph with 1.8 billion edges using only a single GPU.
arXiv Detail & Related papers (2021-12-31T16:12:23Z)
- Self-supervised Contrastive Attributed Graph Clustering [110.52694943592974]
We propose a novel attributed graph clustering network, namely Self-supervised Contrastive Attributed Graph Clustering (SCAGC).
In SCAGC, a self-supervised contrastive loss that leverages inaccurate clustering labels is designed for node representation learning (a hedged sketch follows this entry).
For out-of-sample (OOS) nodes, SCAGC can directly calculate their clustering labels.
arXiv Detail & Related papers (2021-10-15T03:25:28Z)
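Using (possibly noisy) cluster assignments as contrastive supervision
typically means treating nodes that share a pseudo-label as positives. The
sketch below follows the generic supervised-contrastive form (Khosla et al.,
2020) with pseudo-labels substituted for ground truth; it is a plausible
reading of the summary, not SCAGC's exact loss.

    import torch
    import torch.nn.functional as F

    def pseudo_label_contrastive(z: torch.Tensor, pseudo: torch.Tensor,
                                 tau: float = 0.5) -> torch.Tensor:
        # z: (n, d) node embeddings; pseudo: (n,) cluster labels, e.g. from
        # k-means on the current embeddings.
        z = F.normalize(z, dim=1)
        logits = z @ z.T / tau
        eye = torch.eye(z.size(0), dtype=torch.bool)
        logits = logits.masked_fill(eye, float('-inf'))  # drop self-pairs
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        pos = (pseudo.unsqueeze(0) == pseudo.unsqueeze(1)) & ~eye
        # Mean log-likelihood of positives per anchor; the clamp guards
        # anchors whose pseudo-cluster is a singleton.
        return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()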
- Learning Hierarchical Graph Neural Networks for Image Clustering [81.5841862489509]
We propose a hierarchical graph neural network (GNN) model that learns how to cluster a set of images into an unknown number of identities.
Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level (a hedged sketch of this merging step follows this entry).
arXiv Detail & Related papers (2021-07-03T01:28:42Z)
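Merging predicted connected components into supernodes is the step that
builds the next level of the hierarchy. The sketch below does this with
SciPy's connected-components routine and mean-pools member features into each
supernode; the function name and the mean-pooling choice are illustrative
assumptions, not the paper's exact procedure.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import connected_components

    def merge_level(edges: np.ndarray, n: int, feats: np.ndarray):
        # edges: (m, 2) pairs the model predicted as linked at this level.
        adj = sp.coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
                            shape=(n, n))
        k, comp = connected_components(adj, directed=False)
        pooled = np.zeros((k, feats.shape[1]))
        np.add.at(pooled, comp, feats)                 # sum member features
        counts = np.bincount(comp, minlength=k)[:, None]
        return comp, pooled / counts                   # supernode ids + feats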
- Interpretable Clustering on Dynamic Graphs with Recurrent Graph Neural Networks [24.017988997693262]
We study the problem of clustering nodes in a dynamic graph, where the connections between nodes and nodes' cluster memberships may change over time.
We first propose a simple decay-based clustering algorithm that clusters nodes based on weighted connections between them, where the weight decreases at a fixed rate over time (a hedged sketch of this weighting follows this entry).
We characterize the optimal decay rate for each cluster and propose a clustering method that achieves almost exact recovery of the true clusters.
arXiv Detail & Related papers (2020-12-16T04:31:19Z)
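Fixed-rate decay means an interaction observed at time t contributes
exp(-decay_rate * (now - t)) to the edge weight when clustering at time now.
The helper below aggregates time-stamped interactions this way; it
illustrates the decay idea only, and the paper's contribution, the analysis
of the optimal per-cluster rate, is not attempted here.

    import math
    from collections import defaultdict

    def decayed_edge_weights(interactions, decay_rate: float, now: float):
        # interactions: iterable of (u, v, t) time-stamped events.
        w = defaultdict(float)
        for u, v, t in interactions:
            w[frozenset((u, v))] += math.exp(-decay_rate * (now - t))
        return w                                       # edge -> decayed weight

    # Example: older contacts count for less than recent ones.
    events = [("a", "b", 0.0), ("a", "b", 9.0), ("b", "c", 5.0)]
    print(decayed_edge_weights(events, decay_rate=0.5, now=10.0))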
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.