Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks
- URL: http://arxiv.org/abs/2312.00232v1
- Date: Thu, 30 Nov 2023 22:32:24 GMT
- Title: Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks
- Authors: Alexander Möllers, Alexander Immer, Elvin Isufi, Vincent Fortuin
- Abstract summary: We show that a variational Bayesian neural network approach can be used to improve uncertainty estimates.
We propose a new measure of uncertainty for contrastive learning that is based on the disagreement in likelihood due to different positive samples.
- Score: 101.56637264703058
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph contrastive learning has shown great promise when labeled data is
scarce, but large unlabeled datasets are available. However, it often does not
take uncertainty estimation into account. We show that a variational Bayesian
neural network approach can be used to improve not only the uncertainty
estimates but also the downstream performance on semi-supervised
node-classification tasks. Moreover, we propose a new measure of uncertainty
for contrastive learning that is based on the disagreement in likelihood due
to different positive samples.
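The abstract does not give an explicit formula for this disagreement measure; a minimal sketch of one plausible instantiation, assuming the model assigns a (log-)likelihood to each positive sample of an anchor node and that disagreement is taken as the spread of those values:

```python
import math

def positive_sample_disagreement(log_likelihoods):
    """Uncertainty as the spread of log-likelihoods the model assigns to an
    anchor's different positive samples (hypothetical simplification of the
    paper's disagreement-based measure)."""
    n = len(log_likelihoods)
    mean = sum(log_likelihoods) / n
    return sum((ll - mean) ** 2 for ll in log_likelihoods) / n

# Positives scored consistently -> low uncertainty
low = positive_sample_disagreement([-0.9, -1.0, -1.1])
# Positives scored very differently -> high uncertainty
high = positive_sample_disagreement([-0.2, -3.5, -7.0])
assert high > low
```

The key design point is that the uncertainty comes from variation across augmentation-induced positives rather than from the predictive distribution of a single forward pass.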
Related papers
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Training Uncertainty-Aware Classifiers with Conformalized Deep Learning [7.837881800517111]
Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty.
We develop a novel training algorithm that can lead to more dependable uncertainty estimates, without sacrificing predictive power.
arXiv Detail & Related papers (2022-05-12T05:08:10Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
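The Nadaraya-Watson estimate of the conditional label distribution can be sketched in one dimension: each training point votes for its label with a kernel weight that decays with distance to the query (the kernel choice and bandwidth here are illustrative assumptions, not the paper's configuration):

```python
import math

def nadaraya_watson_label_probs(x, train_points, train_labels, n_classes,
                                bandwidth=1.0):
    """Kernel-weighted estimate of p(y | x): Gaussian-kernel weights over
    training points, normalized into a label distribution (toy 1-D sketch)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2))
               for xi in train_points]
    total = sum(weights)
    probs = [0.0] * n_classes
    for w, y in zip(weights, train_labels):
        probs[y] += w / total
    return probs

probs = nadaraya_watson_label_probs(0.0, [-1.0, -0.9, 2.5], [0, 0, 1], 2)
assert abs(sum(probs) - 1.0) < 1e-9
assert probs[0] > probs[1]  # the query sits closer to the class-0 points
```

The entropy of the resulting distribution can then serve as an uncertainty score without changing the underlying deterministic classifier.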
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Bayesian Neural Networks for Reversible Steganography [0.7614628596146599]
We propose to consider uncertainty in predictive models based upon a theoretical framework of Bayesian deep learning.
We approximate the posterior predictive distribution through Monte Carlo sampling with reversible forward passes.
We show that predictive uncertainty can be disentangled into aleatoric and epistemic uncertainties, and that these quantities can be learnt in an unsupervised manner.
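Monte Carlo approximation of the posterior predictive is the common pattern behind this entry: draw weights from the (approximate) posterior, run a forward pass per draw, and summarize the predictions. A minimal sketch with a toy one-weight "network" standing in for a real Bayesian model:

```python
import random
import statistics

def mc_posterior_predictive(sample_weights, forward, x, n_samples=100):
    """Monte Carlo posterior predictive: average forward passes under weight
    draws from an approximate posterior. `sample_weights` and `forward` are
    placeholders for a real Bayesian network."""
    preds = [forward(sample_weights(), x) for _ in range(n_samples)]
    return statistics.mean(preds), statistics.pvariance(preds)

random.seed(0)
mean, var = mc_posterior_predictive(
    sample_weights=lambda: random.gauss(2.0, 0.1),  # toy posterior over one weight
    forward=lambda w, x: w * x,                     # toy linear "network"
    x=3.0,
)
assert abs(mean - 6.0) < 0.5 and var > 0.0
```

The predictive variance here mixes aleatoric and epistemic contributions; the paper's point is that these can be separated, which this sketch does not attempt.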
arXiv Detail & Related papers (2022-01-07T14:56:33Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In
Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
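A generic form of such a disagreement measure is the average pairwise dissimilarity between a set of predictions; the sketch below assumes that shape (the paper's exact dissimilarity function and prediction source may differ):

```python
def disagreement_uncertainty(predictions, dissimilarity):
    """Average pairwise dissimilarity between predictions, as a generic
    disagreement-based uncertainty score (illustrative sketch)."""
    pairs = [(p, q) for i, p in enumerate(predictions)
                    for q in predictions[i + 1:]]
    return sum(dissimilarity(p, q) for p, q in pairs) / len(pairs)

l1 = lambda p, q: abs(p - q)   # toy dissimilarity on scalar predictions
assert disagreement_uncertainty([0.5, 0.5, 0.5], l1) == 0.0  # full agreement
assert disagreement_uncertainty([0.1, 0.5, 0.9], l1) > 0.0   # disagreement
```

Because the score only needs a handful of predictions and a cheap dissimilarity, it avoids the heavier sampling some competing uncertainty methods require at inference time.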
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Uncertainty Aware Semi-Supervised Learning on Graph Data [18.695343563823798]
We propose a multi-source uncertainty framework using a graph neural network (GNN) for node classification predictions.
By collecting evidence from the labels of training nodes, the Graph-based Kernel Dirichlet distribution Estimation (GKDE) method is designed for accurately predicting node-level Dirichlet distributions.
We found that dissonance-based detection yielded the best results on misclassification detection while vacuity-based detection was the best for OOD detection.
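Vacuity and dissonance come from the subjective-logic view of a Dirichlet output; a sketch following the standard evidential formulas (the paper's GKDE method estimates the Dirichlet parameters themselves, which this sketch takes as given, and its exact formulation may differ):

```python
def vacuity_and_dissonance(alpha):
    """Subjective-logic uncertainty measures from Dirichlet parameters alpha:
    vacuity = lack of evidence, dissonance = conflicting evidence
    (standard evidential-learning formulas, sketched)."""
    k = len(alpha)
    s = sum(alpha)
    belief = [(a - 1) / s for a in alpha]  # evidence-based belief masses
    vacuity = k / s                        # mass left unassigned to any class
    diss = 0.0
    for i, bi in enumerate(belief):
        others = [bj for j, bj in enumerate(belief) if j != i]
        denom = sum(others)
        if denom > 0:
            # Balance term rewards pairs of similar, non-zero beliefs
            bal = sum(bj * (1 - abs(bj - bi) / (bj + bi))
                      for bj in others if bj + bi > 0)
            diss += bi * bal / denom
    return vacuity, diss

v_flat, d_flat = vacuity_and_dissonance([1.0, 1.0, 1.0])    # no evidence
v_conf, d_conf = vacuity_and_dissonance([20.0, 20.0, 1.0])  # conflicting evidence
assert v_flat > v_conf   # little evidence -> high vacuity (OOD signal)
assert d_conf > d_flat   # conflicting evidence -> high dissonance (misclassification signal)
```

This split mirrors the paper's finding: vacuity flags inputs the model has no evidence about (OOD), while dissonance flags inputs where the evidence points in opposing directions (likely misclassifications).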
arXiv Detail & Related papers (2020-10-24T04:56:46Z)
- Uncertainty Estimation for End-To-End Learned Dense Stereo Matching via Probabilistic Deep Learning [0.0]
A novel probabilistic neural network is presented for the task of joint depth and uncertainty estimation from epipolar rectified stereo image pairs.
The network learns a probability distribution from which parameters are sampled for every prediction.
The quality of the estimated depth and uncertainty information is assessed in an extensive evaluation on three different datasets.
arXiv Detail & Related papers (2020-02-10T11:27:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.