Bayesian Graph Contrastive Learning
- URL: http://arxiv.org/abs/2112.07823v1
- Date: Wed, 15 Dec 2021 01:45:32 GMT
- Title: Bayesian Graph Contrastive Learning
- Authors: Arman Hasanzadeh, Mohammadreza Armandpour, Ehsan Hajiramezanali,
Mingyuan Zhou, Nick Duffield, Krishna Narayanan
- Abstract summary: We propose a novel Bayesian perspective on graph contrastive learning, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques, which map each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
- Score: 55.36652660268726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has become a key component of self-supervised learning
approaches for graph-structured data. However, despite their success, existing
graph contrastive learning methods are incapable of uncertainty quantification
for node representations or their downstream tasks, limiting their application
in high-stakes domains. In this paper, we propose a novel Bayesian perspective
of graph contrastive learning methods, showing that random augmentations lead to
stochastic encoders. As a result, our proposed method represents each node by a
distribution in the latent space, in contrast to existing techniques, which map
each node to a deterministic vector. By learning distributional
representations, we provide uncertainty estimates in downstream graph analytics
tasks and increase the expressive power of the predictive model. In addition,
we propose a Bayesian framework to infer the probability of perturbations in
each view of the contrastive model, eliminating the need for a computationally
expensive search for hyperparameter tuning. We empirically show a considerable
improvement in performance compared to existing state-of-the-art methods on
several benchmark datasets.
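The core idea — a stochastic encoder that outputs a per-node Gaussian in latent space, with a contrastive loss computed on samples drawn from it — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear encoder, the NT-Xent loss, and the feature-noise "augmentations" are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_embed(features, w_mu, w_logvar):
    """Map node features to a Gaussian (mean, log-variance) in latent space,
    then draw a sample via the reparameterization trick."""
    mu = features @ w_mu
    logvar = features @ w_logvar
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps      # sampled embedding
    return z, mu, logvar

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two views; positives are the
    same node across views, negatives are all other nodes."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                    # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy example: 8 nodes, 5 input features, 3 latent dims, two noisy "views"
# standing in for random graph augmentations.
x = rng.standard_normal((8, 5))
w_mu = rng.standard_normal((5, 3))
w_logvar = rng.standard_normal((5, 3)) * 0.1
view1 = x + 0.05 * rng.standard_normal(x.shape)
view2 = x + 0.05 * rng.standard_normal(x.shape)
z1, mu1, logvar1 = stochastic_embed(view1, w_mu, w_logvar)
z2, _, _ = stochastic_embed(view2, w_mu, w_logvar)
loss = nt_xent(z1, z2)
```

Because each node is a distribution rather than a point, repeated sampling of `z` yields an empirical spread that can serve as an uncertainty estimate for downstream predictions.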
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
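For intuition, in the simplest scalar case the Wasserstein-2 barycenter of one-dimensional Gaussians has a closed form: average the means and average the standard deviations. The paper's Bures-Wasserstein setting generalizes this to multivariate graph-signal distributions, where an iterative algorithm is needed; this sketch covers only the scalar special case.

```python
import numpy as np

def w2_barycenter_1d_gaussians(mus, sigmas):
    """Wasserstein-2 barycenter of 1-D Gaussians N(mu_i, sigma_i^2):
    it is again Gaussian, with means and standard deviations averaged.
    (Multivariate Bures-Wasserstein means require a fixed-point iteration.)"""
    return float(np.mean(mus)), float(np.mean(sigmas))

mu, sigma = w2_barycenter_1d_gaussians([0.0, 2.0, 4.0], [1.0, 1.0, 4.0])
print(mu, sigma)  # 2.0 2.0
```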
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Creating generalizable downstream graph models with random projections [22.690120515637854]
We investigate graph representation learning approaches that enable models to generalize across graphs.
We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features.
The resulting features recover enough information about a node's local neighborhood to support inference with performance competitive with other approaches.
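A minimal sketch of the idea: propagate a shared random projection through successive applications of the random-walk transition matrix, so powers of the matrix are never formed explicitly. Function names and parameters here are illustrative, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_projection_features(adj, k=3, dim=4):
    """Estimate per-node features from powers T, T^2, ..., T^k of the
    random-walk transition matrix via a shared random projection R.
    Computes T^p @ R incrementally instead of materializing T^p."""
    deg = adj.sum(axis=1, keepdims=True)
    t = adj / np.maximum(deg, 1)             # row-stochastic transition matrix
    r = rng.standard_normal((adj.shape[0], dim)) / np.sqrt(dim)
    feats, prop = [], r
    for _ in range(k):
        prop = t @ prop                      # one more power of T applied to R
        feats.append(prop)
    return np.concatenate(feats, axis=1)     # (n, k*dim) feature matrix

# Toy graph: a 5-node path.
a = np.zeros((5, 5))
for i in range(4):
    a[i, i + 1] = a[i + 1, i] = 1
phi = random_projection_features(a)
print(phi.shape)  # (5, 12)
```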
arXiv Detail & Related papers (2023-02-17T14:27:00Z)
- Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes this metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Reachability analysis in stochastic directed graphs by reinforcement learning [67.87998628083218]
We show that the dynamics of the transition probabilities in a Markov digraph can be modeled via a difference inclusion.
We offer a methodology to design reward functions to provide upper and lower bounds on the reachability probabilities of a set of nodes.
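As background, the reachability probability of a target set in a finite Markov chain can be computed by a simple fixed-point (value) iteration. The paper's contribution is designing reward functions that bound such probabilities when transition dynamics are uncertain; this sketch covers only the plain known-chain case.

```python
import numpy as np

def reach_probability(p, targets, iters=200):
    """Probability of eventually reaching a target set in a Markov chain
    with row-stochastic transition matrix p, via the iteration h <- P h
    with h held at 1 on target states (treated as absorbing)."""
    n = p.shape[0]
    h = np.zeros(n)
    h[list(targets)] = 1.0
    for _ in range(iters):
        h = p @ h
        h[list(targets)] = 1.0               # targets absorb with probability 1
    return h

# Toy 3-state chain: state 2 is the target; state 1 may loop back to 0,
# so every transient state still reaches the target almost surely.
p = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
h = reach_probability(p, {2})
print(h.round(3))
```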
arXiv Detail & Related papers (2022-02-25T08:20:43Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
- Fairness-Aware Node Representation Learning [9.850791193881651]
This study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs.
Different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations.
Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity.
arXiv Detail & Related papers (2021-06-09T21:12:14Z)
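The two fairness notions named in the last entry have standard empirical definitions that are straightforward to compute. This sketch assumes binary predictions, binary labels, and a binary sensitive attribute; it is generic fairness bookkeeping, not the paper's augmentation method.

```python
import numpy as np

def statistical_parity_diff(y_hat, s):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|: gap in positive-prediction
    rates between the two sensitive groups."""
    y_hat, s = np.asarray(y_hat), np.asarray(s)
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def equal_opportunity_diff(y_hat, y, s):
    """Gap in true-positive rates between groups, conditioning on y=1."""
    y_hat, y, s = map(np.asarray, (y_hat, y, s))
    pos = y == 1
    return abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())

# Toy predictions over 8 nodes split into two sensitive groups.
y_hat = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y     = np.array([1, 0, 1, 0, 1, 1, 1, 0])
s     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
sp = statistical_parity_diff(y_hat, s)
eo = equal_opportunity_diff(y_hat, y, s)
print(sp)  # 0.0
```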
This list is automatically generated from the titles and abstracts of the papers in this site.