Attribute Graph Clustering via Learnable Augmentation
- URL: http://arxiv.org/abs/2212.03559v2
- Date: Thu, 28 Sep 2023 13:14:07 GMT
- Title: Attribute Graph Clustering via Learnable Augmentation
- Authors: Xihong Yang, Yue Liu, Ke Liang, Sihang Zhou, Xinwang Liu, En Zhu
- Abstract summary: Contrastive deep graph clustering (CDGC) utilizes contrastive learning to group nodes into different clusters.
We propose an Attribute Graph Clustering method via Learnable Augmentation (AGCLA), which introduces learnable augmentors for high-quality augmented samples.
- Score: 71.36827095487294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive deep graph clustering (CDGC) utilizes contrastive learning to
group nodes into different clusters. Better augmentation techniques improve the
quality of the contrastive samples and are therefore one of the key factors for
performance. However, the augmented samples in existing methods are predefined
by human experience and agnostic to the downstream clustering task, leading to
high labor costs and poor performance. To this end, we propose an Attribute
Graph Clustering method via Learnable Augmentation (AGCLA), which introduces
learnable augmentors to generate high-quality and suitable augmented samples
for CDGC. Specifically, we design
two learnable augmentors for attribute and structure information, respectively.
Besides, two refinement matrices, including the high-confidence pseudo-label
matrix and the cross-view sample similarity matrix, are generated to improve
the reliability of the learned affinity matrix. During the training procedure,
we notice that the optimization goals for training the learnable augmentors and
the contrastive learning network differ. In other words, we should guarantee
both the consistency of the embeddings and the diversity of the augmented
samples. Thus, an adversarial learning mechanism
is designed in our method. Moreover, a two-stage training strategy is leveraged
for the high-confidence refinement matrices. Extensive experimental results
demonstrate the effectiveness of AGCLA on six benchmark datasets.
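The adversarial idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the soft attribute mask, cosine cross-view similarity, and InfoNCE-style loss below are illustrative stand-ins for the learnable augmentors, the cross-view sample similarity matrix, and the contrastive objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, mask_logits):
    """Attribute augmentor sketch: a soft per-feature mask (sigmoid of
    learnable logits) applied to the node attribute matrix."""
    return x * (1.0 / (1.0 + np.exp(-mask_logits)))

def cross_view_similarity(z1, z2):
    """Cosine similarity between the two views' node embeddings."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return z1 @ z2.T

def contrastive_loss(sim, tau=0.5):
    """InfoNCE-style loss: pull each node's two views together,
    push apart its similarity to every other node."""
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy data: 8 nodes with 4 attributes, two learnable attribute masks.
x = rng.normal(size=(8, 4))
m1, m2 = rng.normal(size=4), rng.normal(size=4)
z1, z2 = augment(x, m1), augment(x, m2)
sim = cross_view_similarity(z1, z2)
loss = contrastive_loss(sim)
# Adversarial roles: the encoder is updated to minimize `loss`
# (embedding consistency), while the augmentor masks are updated to
# increase it (augmented-sample diversity).
```

In a full training loop the two updates would alternate, mirroring the two-stage, adversarial strategy described above; here the gradient steps are omitted for brevity.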
Related papers
- Dual-perspective Cross Contrastive Learning in Graph Transformers [33.18813968554711]
Graph contrastive learning (GCL) is a popular method for learning graph representations.
This paper proposes a framework termed dual-perspective cross graph contrastive learning (DC-GCL)
DC-GCL incorporates three modifications designed to enhance positive sample diversity and reliability.
arXiv Detail & Related papers (2024-06-01T11:11:49Z)
- GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship between real and generated samples.
Second, we design a self-supervised metric learning scheme to generate more reliable cluster assignments.
arXiv Detail & Related papers (2024-04-14T01:51:11Z)
- A Simplified Framework for Contrastive Learning for Node Representations [2.277447144331876]
We investigate the potential of deploying contrastive learning in combination with Graph Neural Networks for embedding nodes in a graph.
We show that the quality of the resulting embeddings and training time can be significantly improved by a simple column-wise postprocessing of the embedding matrix.
This modification yields improvements in downstream classification tasks of up to 1.5% and even beats existing state-of-the-art approaches on 6 out of 8 different benchmarks.
arXiv Detail & Related papers (2023-05-01T02:04:36Z)
- Cluster-guided Contrastive Graph Clustering Network [53.16233290797777]
We propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC)
We construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks.
To construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples.
arXiv Detail & Related papers (2023-01-03T13:42:38Z)
- Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC)
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss.
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
arXiv Detail & Related papers (2021-11-10T06:53:03Z)
- Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework, which is then applied to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
- Mixing Consistent Deep Clustering [3.5786621294068373]
Good latent representations produce semantically mixed outputs when decoding linear interpolations of two latent representations.
We propose the Mixing Consistent Deep Clustering method which encourages representations to appear realistic.
We show that the proposed method can be added to existing autoencoders to further improve clustering performance.
arXiv Detail & Related papers (2020-11-03T19:47:06Z)
- An unsupervised deep learning framework via integrated optimization of representation learning and GMM-based modeling [31.334196673143257]
This paper introduces a new principle of joint learning on both deep representations and GMM-based deep modeling.
In comparison with existing work in similar areas, our objective function has two learning targets that are jointly optimized.
The compactness of clusters is significantly enhanced by reducing the intra-cluster distances, and the separability is improved by increasing the inter-cluster distances.
arXiv Detail & Related papers (2020-09-11T04:57:03Z)
- Learning to Cluster Faces via Confidence and Connectivity Estimation [136.5291151775236]
We propose a fully learnable clustering framework without requiring a large number of overlapped subgraphs.
Our method significantly improves clustering accuracy and thus performance of the recognition models trained on top, yet it is an order of magnitude more efficient than existing supervised methods.
arXiv Detail & Related papers (2020-04-01T13:39:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.