Graph Self-supervised Learning with Accurate Discrepancy Learning
- URL: http://arxiv.org/abs/2202.02989v1
- Date: Mon, 7 Feb 2022 08:04:59 GMT
- Title: Graph Self-supervised Learning with Accurate Discrepancy Learning
- Authors: Dongki Kim, Jinheon Baek, Sung Ju Hwang
- Abstract summary: We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA)
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
- Score: 64.69095775258164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning of graph neural networks (GNNs) aims to learn an
accurate representation of the graphs in an unsupervised manner, to obtain
transferable representations of them for diverse downstream tasks. Predictive
learning and contrastive learning are the two most prevalent approaches for
graph self-supervised learning. However, they have their own drawbacks. While
the predictive learning methods can learn the contextual relationships between
neighboring nodes and edges, they cannot learn global graph-level similarities.
While contrastive learning can capture global graph-level similarities, its
objective of maximizing the similarity between two differently perturbed graphs
may yield representations that cannot discriminate between two similar graphs
with different properties. To tackle such limitations, we propose a framework that
aims to learn the exact discrepancy between the original and the perturbed
graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
Specifically, we create multiple perturbations of the given graph with varying
degrees of similarity and train the model to predict whether each graph is the
original graph or a perturbed one. Moreover, we further aim to accurately
capture the amount of discrepancy for each perturbed graph using the graph edit
distance. We validate our method on various graph-related downstream tasks,
including molecular property prediction, protein function prediction, and link
prediction tasks, on which our model largely outperforms relevant baselines.
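The abstract describes creating multiple perturbations of a graph with varying degrees of similarity and measuring their discrepancy via graph edit distance. The following is a minimal, hypothetical sketch of that data-generation step only (not the authors' implementation): graphs are modeled as sets of undirected edges, perturbations apply `k` random edge edits, and the edge edit distance is taken as the size of the symmetric difference of the edge sets. All function names and the edit-distance simplification are assumptions for illustration.

```python
# Sketch: generate perturbed graphs with varying degrees of similarity and
# measure their discrepancy with a simple edge edit distance.
# Assumption: graphs share a node set, so edit distance reduces to edge edits.
import random


def perturb(edges, nodes, k, rng):
    """Apply k random edge edits (add or remove an edge) to an edge set."""
    edges = set(edges)
    for _ in range(k):
        if edges and rng.random() < 0.5:
            edges.remove(rng.choice(sorted(edges)))  # delete a random edge
        else:
            u, v = rng.sample(nodes, 2)              # add a random edge
            edges.add(tuple(sorted((u, v))))
    return frozenset(edges)


def edit_distance(e1, e2):
    """Edge edit distance between two graphs on the same node set:
    the number of edges present in exactly one of the two graphs."""
    return len(set(e1) ^ set(e2))


rng = random.Random(0)
nodes = list(range(6))
original = frozenset({(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)})

# Perturbations with an increasing number of edits -> varying similarity.
perturbed = [perturb(original, nodes, k, rng) for k in (1, 2, 4)]
dists = [edit_distance(original, g) for g in perturbed]
```

In a D-SLA-style setup, a discriminator would then be trained to tell `original` from each graph in `perturbed`, with `dists` supervising how far apart their embeddings should be; note that `k` edits bound the distance from above, since random edits can cancel or hit existing edges.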
Related papers
- Imbalanced Graph Classification with Multi-scale Oversampling Graph Neural Networks [25.12261412297796]
We introduce a novel multi-scale oversampling graph neural network (MOSGNN) that learns expressive minority graph representations.
It achieves this by jointly optimizing subgraph-level, graph-level, and pairwise-graph learning tasks.
Experiments on 16 imbalanced graph datasets show that MOSGNN significantly outperforms five state-of-the-art models.
arXiv Detail & Related papers (2024-05-08T09:16:54Z)
- There is more to graphs than meets the eye: Learning universal features with self-supervision [2.882036130110936]
We study the problem of learning features through self-supervision that are generalisable to multiple graphs.
Our approach results in (1) better performance on downstream node classification, (2) learning features that can be re-used for unseen graphs of the same family, (3) more efficient training and (4) compact yet generalisable models.
arXiv Detail & Related papers (2023-05-31T14:08:48Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.