Community-Invariant Graph Contrastive Learning
- URL: http://arxiv.org/abs/2405.01350v1
- Date: Thu, 2 May 2024 14:59:58 GMT
- Title: Community-Invariant Graph Contrastive Learning
- Authors: Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura
- Abstract summary: This research investigates the role of the graph community in graph augmentation.
We propose a community-invariant GCL framework to maintain graph community structure during learnable graph augmentation.
- Score: 21.72222875193335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph augmentation has received great attention in recent years for graph contrastive learning (GCL) to learn well-generalized node/graph representations. However, mainstream GCL methods often favor randomly disrupting graphs for augmentation, which shows limited generalization and inevitably corrupts high-level graph information, i.e., the graph community. Moreover, current knowledge-based graph augmentation methods can focus only on either topology or node features, causing the model to lack robustness against various types of noise. To address these limitations, this research investigates the role of the graph community in graph augmentation and identifies its crucial advantage for learnable graph augmentation. Based on these observations, we propose a community-invariant GCL framework that maintains graph community structure during learnable graph augmentation. By maximizing spectral changes, this framework unifies the constraints of both topology and feature augmentation, enhancing the model's robustness. Empirical evidence on 21 benchmark datasets demonstrates the distinct merits of our framework. Code is released on GitHub (https://github.com/ShiyinTan/CI-GCL.git).
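The framework's core signal, spectral change under augmentation, can be illustrated with a minimal sketch. This is not the paper's constrained objective (which additionally enforces community invariance and couples topology with feature augmentation); it only shows, using NumPy and a hypothetical edge-drop augmentation, how a distance between the Laplacian spectra of a graph and its augmented view might be measured:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}, in ascending order."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)  # returns sorted eigenvalues

def spectral_change(adj, adj_aug):
    """L2 distance between the Laplacian spectra of a graph and its view."""
    return np.linalg.norm(laplacian_spectrum(adj) - laplacian_spectrum(adj_aug))

# Toy example: a 4-node cycle, and the same graph with one edge dropped.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_aug = A.copy()
A_aug[0, 1] = A_aug[1, 0] = 0.0  # hypothetical edge-drop augmentation
print(spectral_change(A, A_aug))  # > 0: the spectrum moved
```

A learnable augmenter in the paper's spirit would *maximize* this change subject to keeping community structure intact, rather than dropping edges at random.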
Related papers
- Graph Augmentation for Recommendation [30.77695833436189]
Graph augmentation with contrastive learning has gained significant attention in the field of recommendation systems.
We propose a principled framework called GraphAug that generates denoised self-supervised signals, enhancing recommender systems.
The GraphAug framework incorporates a graph information bottleneck (GIB)-regularized augmentation paradigm, which automatically distills informative self-supervision information.
arXiv Detail & Related papers (2024-03-25T11:47:53Z)
- Graph Contrastive Learning with Cohesive Subgraph Awareness [34.76555185419192]
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs.
We argue that an awareness of subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance.
We propose a novel unified framework called CTAug, to seamlessly integrate cohesion awareness into various existing GCL mechanisms.
arXiv Detail & Related papers (2024-01-31T03:51:30Z)
- Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum [91.06367395889514]
Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We answer these questions by establishing the connection between GCL and graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
arXiv Detail & Related papers (2022-10-05T15:32:00Z)
- Graph Contrastive Learning with Personalized Augmentation [17.714437631216516]
Graph contrastive learning (GCL) has emerged as an effective tool for learning unsupervised representations of graphs.
We propose a principled framework, termed Graph contrastive learning with Personalized Augmentation (GPA).
GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector.
arXiv Detail & Related papers (2022-09-14T11:37:48Z)
- COSTA: Covariance-Preserving Feature Augmentation for Graph Contrastive Learning [64.78221638149276]
We show that the node embedding obtained via the graph augmentations is highly biased.
Instead of investigating graph augmentation in the input space, we propose augmentations on the hidden features.
We show that the feature augmentation with COSTA achieves comparable/better results than graph augmentation based models.
arXiv Detail & Related papers (2022-06-09T18:46:38Z)
- Learning Graph Augmentations to Learn Graph Representations [13.401746329218017]
LG2AR is an end-to-end automatic graph augmentation framework.
It helps encoders learn generalizable representations on both node and graph levels.
It achieves state-of-the-art results on 18 out of 20 graph-level and node-level benchmarks.
arXiv Detail & Related papers (2022-01-24T17:50:06Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
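The anchor-graph agreement objective described above can be sketched with a generic InfoNCE-style loss. The `contrastive_agreement_loss` helper and the temperature value below are hypothetical stand-ins, not the paper's exact formulation; the point is only that row i of each view represents the same node, so matched rows are the positive pairs:

```python
import numpy as np

def contrastive_agreement_loss(z_anchor, z_learned, tau=0.5):
    """InfoNCE-style agreement between two sets of node embeddings,
    each of shape (n, d). Positives sit on the diagonal of the
    cross-view cosine-similarity matrix."""
    za = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    zl = z_learned / np.linalg.norm(z_learned, axis=1, keepdims=True)
    sim = za @ zl.T / tau                        # (n, n) scaled cosine sims
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
z = rng.standard_normal((16, 4))
# Identical views put every positive at the maximum similarity,
# so the loss is at its per-row minimum.
print(contrastive_agreement_loss(z, z))
```

Minimizing such a loss pulls each node's learned-graph embedding toward its anchor-graph counterpart while pushing it away from all other nodes.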
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision has recently surged to a new frontier: graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations [12.820228374977441]
We propose a novel Collaborative Graph Contrastive Learning framework (CGCL).
This framework harnesses multiple graph encoders to observe the graph.
To ensure the collaboration among diverse graph encoders, we propose the concepts of asymmetric architecture and complementary encoders.
arXiv Detail & Related papers (2021-11-05T05:08:27Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.