InfoGCL: Information-Aware Graph Contrastive Learning
- URL: http://arxiv.org/abs/2110.15438v1
- Date: Thu, 28 Oct 2021 21:10:39 GMT
- Title: InfoGCL: Information-Aware Graph Contrastive Learning
- Authors: Dongkuan Xu, Wei Cheng, Dongsheng Luo, Haifeng Chen, Xiang Zhang
- Abstract summary: We study how graph information is transformed and transferred during the contrastive learning process.
We propose an information-aware graph contrastive learning framework called InfoGCL.
We show for the first time that all recent graph contrastive learning methods can be unified by our framework.
- Score: 26.683911257080304
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Various graph contrastive learning models have been proposed to improve the
performance of learning tasks on graph datasets in recent years. While
effective and prevalent, these models are usually carefully customized. In
particular, although all recent studies create two contrastive views, they
differ greatly in view augmentations, architectures, and objectives. It remains
an open question how to build a graph contrastive learning model from
scratch for particular graph learning tasks and datasets. In this work, we aim
to fill this gap by studying how graph information is transformed and
transferred during the contrastive learning process and proposing an
information-aware graph contrastive learning framework called InfoGCL. The key
point of this framework is to follow the Information Bottleneck principle to
reduce the mutual information between contrastive parts while keeping
task-relevant information intact, at both the level of the individual module
and that of the entire framework, so that the information loss during graph
representation learning can be minimized. We show for the first time that all
recent graph contrastive learning methods can be unified by our framework. We
empirically validate our theoretical analysis on both node and graph
classification benchmark datasets, and demonstrate that our algorithm
significantly outperforms state-of-the-art methods.
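The key point above admits a compact formulation. As a rough sketch in our own notation (x for an input graph, v_1 and v_2 for the two contrastive views, y for the downstream label), and assuming the standard InfoMin/Information Bottleneck formulation rather than a formula quoted from the paper, the optimal views would satisfy

$$
(v_1^{*}, v_2^{*}) = \arg\min_{v_1, v_2} I(v_1; v_2) \quad \text{s.t.} \quad I(v_1; y) = I(v_2; y) = I(x; y),
$$

i.e., the two views share as little information as possible while each still retains all task-relevant information in x, which is what keeps the information loss during representation learning small.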
Related papers
- ENGAGE: Explanation Guided Data Augmentation for Graph Representation
Learning [34.23920789327245]
We propose ENGAGE, where explanation guides the contrastive augmentation process to preserve the key parts in graphs.
We also design two data augmentation schemes on graphs for perturbing structural and feature information, respectively.
arXiv Detail & Related papers (2023-07-03T14:33:14Z) - SEGA: Structural Entropy Guided Anchor View for Graph Contrastive
Learning [12.783251612977299]
In contrastive learning, the choice of view controls the information that the representation captures and influences the performance of the model.
An anchor view that maintains the essential information of input graphs for contrastive learning has hardly been investigated.
We extensively validate the proposed anchor view on various benchmarks regarding graph classification under unsupervised, semi-supervised, and transfer learning.
arXiv Detail & Related papers (2023-05-08T06:52:02Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL)
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture more abundant characteristics.
arXiv Detail & Related papers (2021-07-20T22:09:21Z) - Graph Representation Learning by Ensemble Aggregating Subgraphs via
Mutual Information Maximization [5.419711903307341]
We introduce a self-supervised learning method to enhance the graph-level representations learned by Graph Neural Networks.
To get a comprehensive understanding of the graph structure, we propose an ensemble-learning-like subgraph method.
To achieve efficient and effective contrastive learning, a Head-Tail contrastive sample construction method is proposed.
arXiv Detail & Related papers (2021-03-24T12:06:12Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)