Attribute Graph Clustering via Learnable Augmentation
- URL: http://arxiv.org/abs/2212.03559v2
- Date: Thu, 28 Sep 2023 13:14:07 GMT
- Title: Attribute Graph Clustering via Learnable Augmentation
- Authors: Xihong Yang, Yue Liu, Ke Liang, Sihang Zhou, Xinwang Liu, En Zhu
- Abstract summary: Contrastive deep graph clustering (CDGC) utilizes contrastive learning to group nodes into different clusters.
We propose an Attribute Graph Clustering method via Learnable Augmentation (AGCLA), which introduces learnable augmentors for high-quality augmented samples.
- Score: 71.36827095487294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive deep graph clustering (CDGC) utilizes contrastive learning to group nodes into different clusters. Better augmentation techniques improve the quality of the contrastive samples and are therefore one of the key factors for performance. However, the augmented samples in existing methods are typically predefined by human experience and agnostic to the downstream clustering task, leading to high human-resource costs and poor performance. To this end, we propose an Attribute Graph Clustering method via Learnable Augmentation (AGCLA), which introduces learnable augmentors that generate high-quality, task-suitable augmented samples for CDGC. Specifically, we design two learnable augmentors for attribute and structure information, respectively. In addition, two refinement matrices, i.e., the high-confidence pseudo-label matrix and the cross-view sample similarity matrix, are generated to improve the reliability of the learned affinity matrix. During training, we observe that the optimization goals of the learnable augmentors and the contrastive learning network differ: the network should keep the embeddings of the two views consistent, while the augmentors should keep the augmented samples diverse. An adversarial learning mechanism is therefore designed to balance these two objectives, and a two-stage training strategy is leveraged to obtain the high-confidence refinement matrices. Extensive experimental results demonstrate the effectiveness of AGCLA on six benchmark datasets.
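As a rough illustration of the mechanism sketched in the abstract (learnable attribute/structure augmentors trained adversarially against a contrastive clustering network), the PyTorch-style snippet below pairs an MLP attribute augmentor and a sigmoid edge-reweighting structure augmentor with an InfoNCE-style cross-view consistency loss. The module names, the linear stand-in encoder, and the loss form are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): learnable attribute/structure augmentors
# trained adversarially against a cross-view consistency objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAugmentor(nn.Module):
    """Hypothetical learnable attribute augmentor: perturbs node features X."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.net(x)          # learned perturbation of attributes

class StructureAugmentor(nn.Module):
    """Hypothetical learnable structure augmentor: reweights edges of A."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, x, adj):
        scores = torch.sigmoid(self.proj(x) @ self.proj(x).t())
        return adj * scores             # soft edge masking / reweighting

def cross_view_consistency(z1, z2, tau=0.5):
    """InfoNCE-style loss pulling the two views of each node together."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Toy usage: a plain linear layer stands in for the paper's graph encoder.
n, d = 8, 16
x, adj = torch.randn(n, d), (torch.rand(n, n) > 0.7).float()
attr_aug, struct_aug = AttributeAugmentor(d), StructureAugmentor(d)
encoder = nn.Linear(d, 32)

z1 = encoder(attr_aug(x))               # attribute-augmented view
z2 = encoder(struct_aug(x, adj) @ x)    # structure-augmented view (A'X propagation)
consistency = cross_view_consistency(z1, z2)

# Adversarial schedule: the encoder minimizes the consistency loss, while the
# augmentors are updated on its negative to keep the augmented samples diverse.
loss_encoder = consistency
loss_augmentors = -consistency
```

The sign flip on the augmentor loss captures the adversarial training described above: the contrastive network pursues cross-view consistency while the augmentors pursue diversity. The two-stage refinement of the affinity matrix via high-confidence pseudo-labels and cross-view similarities is omitted from this sketch.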
Related papers
- Dual Boost-Driven Graph-Level Clustering Network [17.423787223848453]
We propose a novel Dual Boost-Driven Graph-Level Clustering Network (DBGCN) to alternately promote graph-level clustering and filter out interference information.
In the pooling step, we evaluate the contribution of features at the global level and optimize them using a learnable transformation matrix.
We first identify and suppress information detrimental to clustering by evaluating similarities between graph-level representations.
arXiv Detail & Related papers (2025-04-08T04:32:46Z)
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z)
- Multi-Task Curriculum Graph Contrastive Learning with Clustering Entropy Guidance [25.5510013711661]
We propose the Clustering-guided Curriculum Graph contrastive Learning (CCGL) framework.
CCGL uses clustering entropy to guide the subsequent graph augmentation and contrastive learning.
Experimental results demonstrate that CCGL achieves excellent performance compared with state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-22T02:18:47Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z)
- SMARTQUERY: An Active Learning Framework for Graph Neural Networks through Hybrid Uncertainty Reduction [25.77052028238513]
We propose a framework to learn a graph neural network with very few labeled nodes using a hybrid uncertainty reduction function.
We demonstrate the competitive performance of our method against state-of-the-art approaches with very few labeled data.
arXiv Detail & Related papers (2022-12-02T20:49:38Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Learning to Cluster Faces via Confidence and Connectivity Estimation [136.5291151775236]
We propose a fully learnable clustering framework without requiring a large number of overlapped subgraphs.
Our method significantly improves clustering accuracy and thus performance of the recognition models trained on top, yet it is an order of magnitude more efficient than existing supervised methods.
arXiv Detail & Related papers (2020-04-01T13:39:37Z)