Bringing Your Own View: Graph Contrastive Learning without Prefabricated
Data Augmentations
- URL: http://arxiv.org/abs/2201.01702v1
- Date: Tue, 4 Jan 2022 15:49:18 GMT
- Title: Bringing Your Own View: Graph Contrastive Learning without Prefabricated
Data Augmentations
- Authors: Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
- Abstract summary: Self-supervision is recently surging at its new frontier of graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
- Score: 94.41860307845812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervision has recently been surging at its new frontier of graph
learning. It facilitates graph representations beneficial to downstream tasks, but
its success can hinge on domain knowledge for handcrafting or on often expensive
trial and error. Even its state-of-the-art representative, graph contrastive
learning (GraphCL), is not completely free of those needs as GraphCL uses a
prefabricated prior reflected by the ad-hoc manual selection of graph data
augmentations. Our work aims at advancing GraphCL by answering the following
questions: How to represent the space of graph augmented views? What principle
can be relied upon to learn a prior in that space? And what framework can be
constructed to learn the prior in tandem with contrastive learning?
Accordingly, we have extended the prefabricated discrete prior in the
augmentation set to a learnable continuous prior in the parameter space of
graph generators, assuming that graph priors per se, similar to the concept of
image manifolds, can be learned by data generation. Furthermore, to form
contrastive views without collapsing to trivial solutions due to the prior
learnability, we have leveraged both principles of information minimization
(InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
Eventually, contrastive learning, InfoMin, and InfoBN are incorporated
organically into one framework of bi-level optimization. Our principled and
automated approach has proven to be competitive against the state-of-the-art
graph self-supervision methods, including GraphCL, on benchmarks of small
graphs; and shown even better generalizability on large-scale graphs, without
resorting to human expertise or downstream validation. Our code is publicly
released at https://github.com/Shen-Lab/GraphCL_Automated.
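The repository above contains the authors' implementation. As a rough illustration of the ideas summarized in the abstract (a learnable, continuous view generator in place of hand-picked augmentations, trained against the encoder in a bi-level loop with an InfoMin-style outer objective), the following is a minimal self-contained sketch. The class names, the Gumbel-sigmoid edge relaxation, the dimensions, and the synthetic data are assumptions made for illustration; the InfoBN regularizer is only indicated by a comment and the released code should be treated as authoritative.

```python
# Minimal sketch (NOT the authors' released code) of bi-level contrastive training with a
# learnable view generator; all names, shapes, and data below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCN(nn.Module):
    """Tiny two-layer GCN over dense adjacency matrices; mean-pools to a graph embedding."""
    def __init__(self, in_dim=8, hid_dim=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):                       # x: [B, N, F_in], adj: [B, N, N]
        norm = adj.sum(-1).clamp(min=1e-6).rsqrt()   # symmetric degree normalization
        adj_n = norm.unsqueeze(-1) * adj * norm.unsqueeze(-2)
        h = F.relu(self.lin1(adj_n @ x))
        h = self.lin2(adj_n @ h)
        return h.mean(dim=1)                         # [B, hid_dim] graph embeddings

class ViewGenerator(nn.Module):
    """Learnable continuous 'prior': predicts per-edge keep probabilities from node features."""
    def __init__(self, in_dim=8, hid_dim=16):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                   nn.Linear(hid_dim, hid_dim))

    def forward(self, x, adj, temperature=1.0):
        z = self.embed(x)                            # [B, N, H]
        logits = z @ z.transpose(1, 2)               # pairwise scores as edge-keep logits
        u = torch.rand_like(adj).clamp(1e-6, 1 - 1e-6)
        gumbel = u.log() - (1 - u).log()             # relaxed Bernoulli (Gumbel-sigmoid) noise
        mask = torch.sigmoid((logits + gumbel) / temperature)
        return adj * mask                            # differentiable augmented view

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss between matched graph embeddings [B, D]."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy bi-level loop on synthetic graphs (sizes and data are made up).
B, N, Fdim = 16, 20, 8
x = torch.randn(B, N, Fdim)
adj = (torch.rand(B, N, N) < 0.2).float()
adj = ((adj + adj.transpose(1, 2)) > 0).float()      # symmetrize

encoder, generator = DenseGCN(Fdim), ViewGenerator(Fdim)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(100):
    # Inner level: the encoder minimizes the contrastive loss between two generated views.
    loss_enc = nt_xent(encoder(x, generator(x, adj)), encoder(x, generator(x, adj)))
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()

    # Outer level (InfoMin flavor): the generator maximizes the same loss so the views share
    # less information; an InfoBN-style term would be added here to keep each view informative
    # and avoid trivial (empty or unchanged) graphs.
    loss_gen = -nt_xent(encoder(x, generator(x, adj)), encoder(x, generator(x, adj)))
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
```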
Related papers
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks [10.794305560114903]
Self-Prompt is a prompting framework for graphs based on the model and data itself.
We introduce asymmetric graph contrastive learning for pretext to address heterophily and align the objectives of pretext and downstream tasks.
We conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority.
arXiv Detail & Related papers (2023-10-16T12:58:04Z)
- Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs [10.034072706245544]
We propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs.
In particular, it predicts a time-aware context embedding and uses Gumbel-Top-K to select the candidate edges closest to this context embedding (see the sketch after this list).
Experiments on temporal link prediction benchmarks demonstrate that TGSL yields significant gains for the popular TGNs such as TGAT and GraphMixer.
arXiv Detail & Related papers (2023-06-13T11:34:36Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Contrastive Self-supervised Learning for Graph Classification [21.207647143672585]
We propose two approaches based on contrastive self-supervised learning (CSSL) to alleviate overfitting.
In the first approach, we use CSSL to pretrain graph encoders on widely-available unlabeled graphs without relying on human-provided labels.
In the second approach, we develop a regularizer based on CSSL, and solve the supervised classification task and the unsupervised CSSL task simultaneously.
arXiv Detail & Related papers (2020-09-13T05:12:55Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
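As referenced in the TGSL entry above, the Gumbel-Top-K trick selects k items by adding Gumbel noise to their scores and taking the top k, which amounts to sampling k items without replacement in proportion to their exponentiated scores. The snippet below is a small, hedged illustration of that selection step applied to candidate edges scored against a predicted context embedding; it is not the TGSL code, and the function name, tensors, and dimensions are assumptions.

```python
# Illustrative Gumbel-Top-K edge selection (assumed shapes; not the TGSL implementation).
import torch
import torch.nn.functional as F

def gumbel_top_k_edges(context, candidate_edge_emb, k, tau=1.0):
    """context: [D] predicted context embedding; candidate_edge_emb: [M, D]; returns k edge indices."""
    # Score each candidate edge by cosine similarity to the context embedding.
    scores = F.cosine_similarity(candidate_edge_emb, context.unsqueeze(0), dim=-1)  # [M]
    # Perturb the scores with Gumbel(0, 1) noise and keep the top-k indices.
    gumbel = -torch.log(-torch.log(torch.rand_like(scores).clamp_min(1e-9)).clamp_min(1e-9))
    return (scores / tau + gumbel).topk(k).indices

# Toy usage: sample 5 of 100 candidate edges for one context embedding.
context = torch.randn(64)
candidates = torch.randn(100, 64)
print(gumbel_top_k_edges(context, candidates, k=5))
```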