Self-Supervised Representation Learning via Latent Graph Prediction
- URL: http://arxiv.org/abs/2202.08333v1
- Date: Wed, 16 Feb 2022 21:10:33 GMT
- Title: Self-Supervised Representation Learning via Latent Graph Prediction
- Authors: Yaochen Xie, Zhao Xu, Shuiwang Ji
- Abstract summary: Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data.
We propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction.
Our experimental results demonstrate the superior performance of LaGraph and its robustness to shrinking training-sample sizes on both graph-level and node-level tasks.
- Score: 41.64774038444827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) of graph neural networks is emerging as a
promising way of leveraging unlabeled data. Currently, most methods are based
on contrastive learning adapted from the image domain, which requires view
generation and a sufficient number of negative samples. In contrast, existing
predictive models do not require negative sampling, but lack theoretical
guidance on the design of pretext training tasks. In this work, we propose
LaGraph, a theoretically grounded predictive SSL framework based on latent
graph prediction. The learning objectives of LaGraph are derived as self-supervised
upper bounds to objectives for predicting unobserved latent graphs. In addition
to its improved performance, LaGraph provides explanations for recent successes
of predictive models that include invariance-based objectives. We provide
theoretical analysis comparing LaGraph to related methods in different domains.
Our experimental results demonstrate the superior performance of LaGraph and
its robustness to shrinking training-sample sizes on both graph-level and
node-level tasks.
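To make the predictive-SSL recipe above concrete, here is a minimal sketch in the spirit of the abstract: predict masked (unobserved) node features from the rest of the graph, plus an invariance-style term on the representations, with no negative samples. The `encoder`/`decoder` callables, the zero-masking scheme, and the `alpha` weight are illustrative assumptions, not LaGraph's actual objective or derivation.

```python
# Hypothetical sketch of a predictive graph-SSL objective; NOT the LaGraph
# objective itself. Assumes user-supplied GNN `encoder` and head `decoder`.
import torch
import torch.nn.functional as F

def predictive_ssl_loss(encoder, decoder, x, edge_index,
                        mask_ratio=0.15, alpha=1.0):
    """One step's pretext loss for a masked node-feature prediction task.

    encoder: maps (node features, edge_index) -> node representations
    decoder: maps node representations -> reconstructed node features
    x:       [num_nodes, feat_dim] node feature matrix
    """
    num_nodes = x.size(0)
    mask = torch.rand(num_nodes, device=x.device) < mask_ratio

    # Build a corrupted view by zeroing out the masked nodes' features.
    x_masked = x.clone()
    x_masked[mask] = 0.0  # other corruption schemes are possible

    z_full = encoder(x, edge_index)
    z_masked = encoder(x_masked, edge_index)

    # Predictive term: reconstruct the original features of masked nodes.
    recon = decoder(z_masked)
    loss_pred = F.mse_loss(recon[mask], x[mask])

    # Invariance term: unmasked nodes' representations should be stable
    # under the corruption.
    loss_inv = F.mse_loss(z_masked[~mask], z_full[~mask].detach())

    return loss_pred + alpha * loss_inv
```

As with the predictive models the abstract discusses, nothing in this sketch requires view generation for contrastive pairs or negative sampling; the masking itself supplies the prediction target.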
Related papers
- Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis [7.309233340654514]
This paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective.
We provide a formal guarantee theorem, demonstrating graph prompts' capacity to approximate graph transformation operators.
We derive upper bounds on the error of these data operations by graph prompts for a single graph and extend this discussion to batches of graphs.
arXiv Detail & Related papers (2024-10-02T15:07:13Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning in which random augmentations naturally lead to stochastic encoders.
Our method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters [28.50409304490877]
We develop a theoretical framework for graph classification problems in the partial observation setting.
We propose a new graph classification model that works on a randomly sampled subgraph.
arXiv Detail & Related papers (2021-11-05T08:32:46Z)
- How Neural Processes Improve Graph Link Prediction [35.652234989200956]
We propose a meta-learning approach with graph neural networks for link prediction: Neural Processes for Graph Neural Networks (NPGNN).
NPGNN can perform both transductive and inductive learning tasks and adapt to patterns in a large new graph after training with a small subgraph.
arXiv Detail & Related papers (2021-09-30T07:35:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.