SCGC : Self-Supervised Contrastive Graph Clustering
- URL: http://arxiv.org/abs/2204.12656v1
- Date: Wed, 27 Apr 2022 01:38:46 GMT
- Title: SCGC : Self-Supervised Contrastive Graph Clustering
- Authors: Gayan K. Kulatilleke, Marius Portmann, Shekhar S. Chandra
- Abstract summary: Graph clustering discovers groups or communities within networks.
Deep learning methods such as autoencoders cannot incorporate rich structural information.
We propose Self-Supervised Contrastive Graph Clustering (SCGC).
- Score: 1.1470070927586016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph clustering discovers groups or communities within networks. Deep
learning methods such as autoencoders (AE) extract effective clustering and
downstream representations but cannot incorporate rich structural information.
While Graph Neural Networks (GNNs) have shown great success in encoding graph
structure, typical GNNs based on convolution or attention variants suffer from
over-smoothing, noise, and heterophily; they are also computationally expensive
and typically require the complete graph to be present. Instead, we propose
Self-Supervised Contrastive Graph Clustering (SCGC), which imposes graph
structure via
contrastive loss signals to learn discriminative node representations and
iteratively refined soft cluster labels. We also propose SCGC*, with a novel,
more effective Influence Augmented Contrastive (IAC) loss that fuses richer
structural information while using half the original model parameters. SCGC(*)
is faster thanks to simple linear units, completely eliminates the convolutions
and attention of traditional GNNs, and yet efficiently incorporates structure.
It is impervious to layer depth and robust to over-smoothing, incorrect edges,
and heterophily. It is scalable via batching, which many prior GNN models do
not support, and is trivially parallelizable. We obtain significant
improvements over the state-of-the-art on a wide range of benchmark graph
datasets, including images, sensor data, text, and citation networks:
specifically, 20% on ARI and 18% on NMI for DBLP, an overall 55% reduction in
training time, and an overall 81% reduction in inference time. Our code is
available at:
https://github.com/gayanku/SCGC
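As a rough illustration of the paper's central mechanism (imposing graph structure through a contrastive signal rather than convolutions or attention), here is a minimal sketch that treats linked nodes as positive pairs and every other node as an in-batch negative. The encoder, temperature, and toy data are illustrative assumptions, not the authors' implementation; the repository above contains the real code.

```python
import torch
import torch.nn.functional as F

def neighbor_contrastive_loss(z, edge_index, tau=0.5):
    """InfoNCE-style loss that pulls linked nodes together and pushes all
    other nodes apart, imposing graph structure without convolutions.

    z          : (N, d) node embeddings, e.g. from a simple linear encoder
    edge_index : (2, E) tensor of edges, used as positive pairs
    tau        : temperature (illustrative value, not from the paper)
    """
    z = F.normalize(z, dim=1)                       # cosine-similarity space
    sim = z @ z.t() / tau                           # (N, N) similarity logits
    mask = torch.eye(z.shape[0], dtype=torch.bool)  # exclude self-pairs
    log_prob = F.log_softmax(sim.masked_fill(mask, float('-inf')), dim=1)
    src, dst = edge_index
    return -log_prob[src, dst].mean()               # other nodes = negatives

# Toy usage: 4 nodes, a linear (convolution-free) encoder, 3 edges.
z = torch.nn.Linear(5, 8)(torch.randn(4, 5))
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
neighbor_contrastive_loss(z, edges).backward()
```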
Related papers
- Synergistic Deep Graph Clustering Network [14.569867830074292]
We propose a graph clustering framework named Synergistic Deep Graph Clustering Network (SynC).
In our approach, we design a Transform Input Graph Auto-Encoder (TIGAE) to obtain high-quality embeddings for guiding structure augmentation.
Notably, representation learning and structure augmentation share weights, significantly reducing the number of model parameters.
arXiv Detail & Related papers (2024-06-22T09:40:34Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
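The coreset idea above can be illustrated with farthest-point greedy selection over spectral embeddings; the Laplacian embedding, seeding rule, and selection criterion below are generic assumptions, not the exact SGGC algorithm.

```python
import numpy as np

def greedy_spectral_coreset(A, m, k=2):
    """Pick m nodes by farthest-point greedy search in a spectral
    embedding space (a generic sketch, not the SGGC criterion).

    A : (N, N) symmetric adjacency matrix; k : embedding dimension.
    """
    d = A.sum(axis=1)
    L = np.diag(d) - A                      # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)             # ascending eigenvalues
    emb = vecs[:, 1:k + 1]                  # skip the trivial eigenvector
    chosen = [int(np.argmax(d))]            # seed: highest-degree node
    dist = np.linalg.norm(emb - emb[chosen[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))          # farthest from the current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(emb - emb[nxt], axis=1))
    return chosen

# Toy usage: a small 4-node graph, select a 2-node coreset.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(greedy_spectral_coreset(A, m=2))
```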
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
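The "one-time message passing into regular-shaped tensors" idea above can be sketched as follows: aggregate neighbors hop by hop once, compress each hop with a Gaussian random projection, and hand the fixed-size result to any MLP. The dimensions, dense adjacency, and projection scheme are illustrative assumptions, not RpHGNN's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def precompute_with_random_projection(X, A_norm, out_dim=8, hops=2):
    """One-time multi-hop aggregation compressed by random projections.

    X      : (N, d) node features
    A_norm : (N, N) row-normalized adjacency (dense here for brevity)
    Returns a regular (N, hops * out_dim) matrix, so training needs no
    further message passing.
    """
    feats, H = [], X
    for _ in range(hops):
        H = A_norm @ H                               # one aggregation hop
        R = rng.normal(size=(H.shape[1], out_dim)) / np.sqrt(out_dim)
        feats.append(H @ R)                          # JL-style compression
    return np.concatenate(feats, axis=1)

# Toy usage: 4 nodes, 10-dim features, uniform neighbor averaging.
Z = precompute_with_random_projection(np.random.randn(4, 10), np.full((4, 4), 0.25))
```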
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z)
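A common way to make two-view representations redundancy-free is a Barlow-Twins-style objective that drives the cross-correlation between the views toward the identity matrix. The sketch below shows that generic objective; the view sources (an autoencoder and a graph autoencoder, per the summary above), the weighting, and all names are assumptions, not the paper's exact loss.

```python
import torch

def redundancy_reduction_loss(z1, z2, lam=5e-3):
    """Push the cross-correlation of two view embeddings toward identity:
    features align across views (diagonal -> 1) while staying mutually
    non-redundant (off-diagonal -> 0). A sketch, not R^2FGC's exact loss.
    """
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # standardize per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.t() @ z2) / z1.shape[0]               # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

# Toy usage: 32 nodes; z1 from an attribute view, z2 from a structure view.
loss = redundancy_reduction_loss(torch.randn(32, 16), torch.randn(32, 16))
```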
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
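One simple form of embedding-induced graph refinement is rebuilding a kNN adjacency from the learned embeddings each round; the sketch below is that generic construction with an assumed k, not EGRC-Net's actual refinement rule.

```python
import torch

def refine_graph_from_embeddings(Z, k=3):
    """Rebuild a symmetric kNN adjacency from learned embeddings, a simple
    stand-in for embedding-induced graph refinement (illustrative only).
    """
    dist = torch.cdist(Z, Z)                    # pairwise distances
    dist.fill_diagonal_(float('inf'))           # forbid self-loops
    idx = dist.topk(k, largest=False).indices   # k nearest neighbors per node
    A = torch.zeros(Z.shape[0], Z.shape[0])
    A.scatter_(1, idx, 1.0)
    return ((A + A.t()) > 0).float()            # symmetrize

# Toy usage: refine a 6-node graph from 16-dim embeddings.
A_refined = refine_graph_from_embeddings(torch.randn(6, 16))
```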
- Learning heterophilious edge to drop: A general framework for boosting graph neural networks [19.004710957882402]
This work is the first to mitigate the negative impacts of heterophily by optimizing the graph structure.
We propose a structure learning method called LHE to identify heterophilious edges to drop.
Experiments demonstrate the remarkable performance improvement of GNNs with LHE on multiple datasets across the full spectrum of homophily levels.
arXiv Detail & Related papers (2022-05-23T14:07:29Z)
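Edge dropping can be sketched as scoring every edge by how strongly its endpoints agree and keeping only the top fraction; the cosine-similarity criterion and keep ratio below are assumptions, not LHE's learned structure model.

```python
import torch
import torch.nn.functional as F

def drop_heterophilious_edges(z, edge_index, keep_ratio=0.8):
    """Keep the edges whose endpoint embeddings agree the most; low-scoring
    (likely heterophilious) edges are dropped. A generic similarity filter,
    not the paper's exact criterion.

    z : (N, d) node embeddings; edge_index : (2, E) edge list.
    """
    src, dst = edge_index
    score = F.cosine_similarity(z[src], z[dst], dim=1)  # edge homophily proxy
    k = max(1, int(keep_ratio * score.numel()))
    return edge_index[:, score.topk(k).indices]

# Toy usage: keep the 3 most homophilous of 5 edges.
edges = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
kept = drop_heterophilious_edges(torch.randn(6, 16), edges, keep_ratio=0.6)
```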
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
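Von Neumann entropy on graphs is typically computed from a density matrix derived from the Laplacian; the sketch below uses that standard construction, which may differ from the paper's exact heterophily metric.

```python
import numpy as np

def von_neumann_entropy(A):
    """Entropy S = -sum(lam * log(lam)) of the density matrix
    rho = L / tr(L), with L the combinatorial Laplacian. A common graph
    construction; the paper's metric may be defined differently.
    """
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L / np.trace(L))   # eigenvalues sum to 1
    lam = lam[lam > 1e-12]                      # 0 * log 0 := 0
    return float(-(lam * np.log(lam)).sum())

# Toy usage: a 3-node star graph.
S = von_neumann_entropy(np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float))
```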
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
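Joint sparsification can be sketched as magnitude pruning applied to two mask tensors, one over the adjacency and one over the weights; the mask form, pruning fraction, and threshold rule are illustrative, not the exact UGS procedure.

```python
import torch

def prune_by_magnitude(mask_logits, prune_frac=0.2):
    """Zero out the lowest-magnitude fraction of a mask tensor. Applied to
    both an adjacency mask and weight masks, this mimics joint graph/weight
    sparsification (a sketch of the idea only).
    """
    flat = mask_logits.abs().flatten()
    k = int(prune_frac * flat.numel())
    if k == 0:
        return torch.ones_like(mask_logits)
    thresh = flat.kthvalue(k).values            # k-th smallest magnitude
    return (mask_logits.abs() > thresh).float()

# Toy usage: prune 20% of a 5x5 adjacency mask and a 16x8 weight mask.
adj_mask = prune_by_magnitude(torch.randn(5, 5))
w_mask = prune_by_magnitude(torch.randn(16, 8))
```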
- Graph Clustering with Graph Neural Networks [5.305362965553278]
Graph Neural Networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks.
Unsupervised problems on graphs, such as graph clustering, have proved more resistant to advances in GNNs.
We introduce Deep Modularity Networks (DMoN), an unsupervised pooling method inspired by the modularity measure of clustering quality.
arXiv Detail & Related papers (2020-06-30T15:30:49Z)
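Modularity has a clean soft relaxation that pooling can maximize directly: Q = Tr(C^T (A - d d^T / 2m) C) / 2m for soft assignments C. The sketch below computes that well-known quantity; DMoN additionally regularizes against cluster collapse, which is omitted here.

```python
import torch

def soft_modularity(A, C):
    """Soft modularity Q = Tr(C^T B C) / 2m with B = A - d d^T / 2m,
    where C holds soft cluster assignments (rows sum to 1).
    """
    d = A.sum(dim=1, keepdim=True)      # degree column vector
    two_m = d.sum()                     # 2m: total degree
    B = A - (d @ d.t()) / two_m         # modularity matrix
    return torch.trace(C.t() @ B @ C) / two_m

# Toy usage: a 4-node graph softly assigned to 2 clusters.
A = torch.tensor([[0., 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
C = torch.softmax(torch.randn(4, 2), dim=1)
q = soft_modularity(A, C)               # maximize q (minimize -q) to cluster
```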
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
However, they suffer from the notorious over-smoothing problem, in which the learned representations converge to nearly identical vectors as many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
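The gating idea can be sketched as a highway-style mix between a node's own representation and its neighborhood aggregate, so stacking layers need not erase node identity. The layer shapes and gate parameterization below are assumptions in the spirit of GHNet, not its published layer.

```python
import torch
import torch.nn as nn

class GatedGraphLayer(nn.Module):
    """Highway-style graph layer: a learned gate mixes the smoothed
    neighborhood aggregate with the node's own features, limiting
    over-smoothing (a sketch of the gating idea, not GHNet's exact layer).
    """
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, H, A_hat):
        agg = self.lin(A_hat @ H)                              # GCN-style hop
        g = torch.sigmoid(self.gate(torch.cat([H, agg], dim=1)))
        return g * agg + (1 - g) * H                           # highway mix

# Toy usage: 4 nodes, 16-dim features, a row-normalized adjacency.
A_hat = torch.full((4, 4), 0.125) + torch.eye(4) * 0.5
out = GatedGraphLayer(16)(torch.randn(4, 16), A_hat)
```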