SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning
- URL: http://arxiv.org/abs/2305.04501v2
- Date: Fri, 9 Jun 2023 08:57:49 GMT
- Title: SEGA: Structural Entropy Guided Anchor View for Graph Contrastive Learning
- Authors: Junran Wu, Xueyuan Chen, Bowen Shi, Shangzhe Li, Ke Xu
- Abstract summary: In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model.
An anchor view that maintains the essential information of input graphs for contrastive learning has hardly been investigated.
We extensively validate the proposed anchor view on various benchmarks regarding graph classification under unsupervised, semi-supervised, and transfer learning.
- Score: 12.783251612977299
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In contrastive learning, the choice of "view" controls the
information that the representation captures and influences the performance of
the model. However, leading graph contrastive learning methods generally
produce views via random corruption or learning, which can discard essential
information and alter semantic information. An anchor view that maintains the
essential information of input graphs for contrastive learning has hardly been
investigated. In this paper, based on the theory of the graph information
bottleneck, we deduce the definition of this anchor view; put differently, the
anchor view that preserves the essential information of the input graph should
have minimal structural uncertainty. Furthermore, guided by structural entropy,
we implement this anchor view, termed SEGA, for graph contrastive learning. We
extensively validate the proposed anchor view on various graph classification
benchmarks under unsupervised, semi-supervised, and transfer learning, and
achieve significant performance boosts over state-of-the-art methods.
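As intuition for "minimal structural uncertainty": structural information theory defines the one-dimensional structural entropy of a graph from its degree distribution, and SEGA minimizes higher-dimensional structural entropy over encoding trees that refine this base measure. The sketch below (function name ours, not from the paper) only illustrates the one-dimensional case:

```python
import math
import networkx as nx

def one_dim_structural_entropy(G: nx.Graph) -> float:
    """One-dimensional structural entropy of an undirected graph:
    H1(G) = -sum_i (d_i / 2m) * log2(d_i / 2m), with m = |E|.
    Lower values indicate less structural uncertainty."""
    two_m = 2 * G.number_of_edges()
    if two_m == 0:
        return 0.0
    return -sum((d / two_m) * math.log2(d / two_m)
                for _, d in G.degree() if d > 0)

# Compare two simple topologies.
print(one_dim_structural_entropy(nx.complete_graph(8)))  # ~3.00 bits
print(one_dim_structural_entropy(nx.star_graph(7)))      # ~2.40 bits
```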
Related papers
- Control-based Graph Embeddings with Data Augmentation for Contrastive Learning [3.250579305400297]
We study the problem of unsupervised graph representation learning by harnessing the control properties of dynamical networks defined on graphs.
A crucial step in contrastive learning is the creation of 'augmented' graphs from the input graphs.
Here, we propose a unique method for generating these augmented graphs by leveraging the control properties of networks.
arXiv Detail & Related papers (2024-03-07T22:14:04Z)
- ENGAGE: Explanation Guided Data Augmentation for Graph Representation Learning [34.23920789327245]
We propose ENGAGE, in which explanations guide the contrastive augmentation process to preserve the key parts of graphs.
We also design two data augmentation schemes on graphs for perturbing structural and feature information, respectively.
arXiv Detail & Related papers (2023-07-03T14:33:14Z)
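ENGAGE's two augmentation schemes build on the structural and feature perturbations standard in graph contrastive learning. The unguided baselines look roughly like the sketch below (PyTorch; names ours, and ENGAGE itself biases these perturbations with explanation scores rather than sampling uniformly):

```python
import torch

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Structural perturbation: drop each edge independently with prob p.
    edge_index has shape [2, num_edges] (COO format)."""
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

def mask_features(x: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Feature perturbation: zero out each feature column with prob p."""
    mask = (torch.rand(x.size(1)) >= p).float()
    return x * mask  # broadcasts over all nodes

# Usage: two stochastic views of the same graph for a contrastive objective.
x = torch.randn(5, 16)                          # 5 nodes, 16 features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
view1 = (mask_features(x), drop_edges(edge_index))
view2 = (mask_features(x), drop_edges(edge_index))
```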
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
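GraphACL's precise construction is in the paper; as a generic illustration of "adversarial augmentations" for graph self-supervised learning, a one-step FGSM-style feature perturbation might look like this sketch (the `encoder` and `loss_fn` arguments are hypothetical placeholders, not the paper's API):

```python
import torch

def adversarial_view(x, edge_index, encoder, loss_fn, eps=0.01):
    """Perturb node features one FGSM step in the direction that
    increases the self-supervised loss, yielding a 'hard' view
    for contrastive training."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(encoder(x_adv, edge_index), encoder(x, edge_index))
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).detach()
```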
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
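"Maximize the agreement between the anchor graph and the learned graph" is typically realized with an InfoNCE-style objective. A minimal NT-Xent sketch over per-node embeddings from the two graphs (a generic formulation, not necessarily the paper's exact loss) follows:

```python
import torch
import torch.nn.functional as F

def nt_xent(z_anchor: torch.Tensor, z_learned: torch.Tensor, tau: float = 0.5):
    """NT-Xent contrastive loss: node i's embedding under the anchor graph
    should agree with the same node's embedding under the learned graph and
    disagree with every other node's. Shapes: [num_nodes, dim]."""
    z1 = F.normalize(z_anchor, dim=1)
    z2 = F.normalize(z_learned, dim=1)
    logits = z1 @ z2.t() / tau                          # [N, N] similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```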
- Graph Structure Learning with Variational Information Bottleneck [70.62851953251253]
We propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL.
VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks.
arXiv Detail & Related papers (2021-12-16T14:22:13Z)
- InfoGCL: Information-Aware Graph Contrastive Learning [26.683911257080304]
We study how graph information is transformed and transferred during the contrastive learning process.
We propose an information-aware graph contrastive learning framework called InfoGCL.
We show for the first time that all recent graph contrastive learning methods can be unified by our framework.
arXiv Detail & Related papers (2021-10-28T21:10:39Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
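The "minimal sufficient representation" objective that GIB inherits from the general IB can be written as follows, with D the input graph data, Y the target, Z the representation, and β a trade-off coefficient (standard IB notation assumed here; the paper further restricts the variational family using local graph dependence):

```latex
\min_{p(Z \mid D)} \; -I(Y; Z) + \beta \, I(D; Z)
```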
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
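Training by maximizing mutual information between an encoder's input and output is commonly implemented with a learned critic. The sketch below uses a bilinear critic with a Jensen-Shannon-style binary objective (a simplification; GMI itself decomposes the measure over individual edges and features):

```python
import torch
import torch.nn.functional as F

class BilinearMIEstimator(torch.nn.Module):
    """Jensen-Shannon MI estimator with a bilinear critic: score aligned
    (input feature, hidden representation) pairs against mismatched pairs
    obtained by shuffling nodes."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.critic = torch.nn.Bilinear(in_dim, hid_dim, 1)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        pos = self.critic(x, h)                              # aligned pairs
        neg = self.critic(x[torch.randperm(x.size(0))], h)   # mismatched pairs
        # Maximizing MI corresponds to minimizing this binary objective.
        return (F.softplus(-pos) + F.softplus(neg)).mean()

# Usage: gradients flow into the critic and any encoder upstream of h.
est = BilinearMIEstimator(16, 32)
loss = est(torch.randn(10, 16), torch.randn(10, 32))
loss.backward()
```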