Improvements on Uncertainty Quantification for Node Classification via
Distance-Based Regularization
- URL: http://arxiv.org/abs/2311.05795v1
- Date: Fri, 10 Nov 2023 00:00:20 GMT
- Title: Improvements on Uncertainty Quantification for Node Classification via
Distance-Based Regularization
- Authors: Russell Alan Hart, Linlin Yu, Yifei Lou, Feng Chen
- Abstract summary: Deep neural networks have achieved significant success in recent decades, but they are not well-calibrated and often produce unreliable predictions.
We propose a distance-based regularization that encourages clustered OOD nodes to remain clustered in the latent space.
We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state-of-the-art in both OOD detection and misclassification detection.
- Score: 4.121906004284458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved significant success in recent decades,
but they are not well-calibrated and often produce unreliable predictions. A
large body of literature relies on uncertainty quantification to evaluate the
reliability of a learning model, which is particularly important for
applications of out-of-distribution (OOD) detection and misclassification
detection. We are interested in uncertainty quantification for interdependent
node-level classification. We start our analysis based on graph posterior
networks (GPNs) that optimize the uncertainty cross-entropy (UCE)-based loss
function. We describe the theoretical limitations of the widely-used UCE loss.
To alleviate the identified drawbacks, we propose a distance-based
regularization that encourages clustered OOD nodes to remain clustered in the
latent space. We conduct extensive comparison experiments on eight standard
datasets and demonstrate that the proposed regularization outperforms the
state-of-the-art in both OOD detection and misclassification detection.
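As a minimal illustration of the two ingredients named in the abstract, the sketch below pairs the standard UCE loss on Dirichlet parameters with one plausible distance-based term; the tensor names (alpha, z, edge_index) and the weight lambda_reg are assumptions made for illustration, not the paper's exact formulation.

```python
import torch

def uce_loss(alpha, y_onehot):
    """Uncertainty cross-entropy (UCE) for Dirichlet parameters.
    alpha: (N, C) concentration parameters, y_onehot: (N, C) one-hot labels."""
    alpha0 = alpha.sum(dim=-1, keepdim=True)
    return (y_onehot * (torch.digamma(alpha0) - torch.digamma(alpha))).sum(-1).mean()

def latent_distance_reg(z, edge_index):
    """Hypothetical distance-based term: shrink latent distances between
    connected nodes so that clustered (e.g. OOD) nodes stay clustered."""
    src, dst = edge_index                      # (2, E) COO edge list
    return (z[src] - z[dst]).pow(2).sum(-1).mean()

# Combined objective (lambda_reg is an assumed tuning weight):
# loss = uce_loss(alpha, y_onehot) + lambda_reg * latent_distance_reg(z, edge_index)
```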
Related papers
- Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning [5.318766629972959]
Uncertainty quantification is a crucial but challenging task in many high-dimensional regression or learning problems.
We develop a new data-driven approach for UQ in regression that applies both to classical regression approaches as well as to neural networks.
arXiv Detail & Related papers (2024-07-18T16:42:10Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
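For the Fisher-information entry above, the closed-form FIM of a Dirichlet distribution, diag(psi'(alpha)) minus psi'(alpha_0) times the all-ones matrix, is the central object; the sketch below only computes that matrix and a per-sample informativeness score from it, and the reweighting suggested in the final comment is an illustrative assumption rather than the paper's exact objective.

```python
import torch

def dirichlet_fim(alpha):
    """Fisher information matrix of Dir(alpha) w.r.t. alpha, per sample.
    alpha: (N, C) -> (N, C, C) with entries psi'(alpha_j)*delta_jk - psi'(alpha_0)."""
    alpha0 = alpha.sum(dim=-1, keepdim=True)            # (N, 1)
    trig = torch.polygamma(1, alpha)                     # psi'(alpha_k), (N, C)
    trig0 = torch.polygamma(1, alpha0).unsqueeze(-1)     # psi'(alpha_0), (N, 1, 1)
    return torch.diag_embed(trig) - trig0

def informativeness(alpha):
    """One possible per-sample informativeness score: log-determinant of the FIM."""
    return torch.logdet(dirichlet_fim(alpha))            # (N,)

# Illustrative use (an assumption, not the paper's exact loss):
# weights = informativeness(alpha).detach()
# loss = (weights * per_sample_uce).mean()
```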
- Controlled Dropout for Uncertainty Estimation [11.225333867982359]
Uncertainty in a neural network is one of the most discussed topics for safety-critical applications.
We present a new version of the traditional dropout layer where we are able to fix the number of dropout configurations.
arXiv Detail & Related papers (2022-05-06T09:48:11Z)
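The controlled-dropout abstract above does not spell out the construction, so the following is merely one guess at what fixing the number of dropout configurations could look like: a layer that draws its mask from a small pre-generated pool instead of sampling a fresh mask on every forward pass.

```python
import torch
import torch.nn as nn

class ControlledDropout(nn.Module):
    """Dropout drawing its mask from a fixed pool of n_configs pre-sampled
    configurations (a hypothetical reading of the abstract above)."""
    def __init__(self, n_features, n_configs=8, p=0.5):
        super().__init__()
        masks = (torch.rand(n_configs, n_features) > p).float() / (1.0 - p)
        self.register_buffer("masks", masks)   # fixed after construction

    def forward(self, x, config_id):
        # config_id selects which of the fixed dropout configurations to apply
        return x * self.masks[config_id]

# Usage idea: run one forward pass per configuration and use the spread of the
# predictions as an uncertainty estimate, as in Monte-Carlo dropout.
```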
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Uncertainty Aware Semi-Supervised Learning on Graph Data [18.695343563823798]
We propose a multi-source uncertainty framework using a graph neural network (GNN) for node classification predictions.
By collecting evidence from the labels of training nodes, the Graph-based Kernel Dirichlet distribution Estimation (GKDE) method is designed for accurately predicting node-level Dirichlet distributions.
We found that dissonance-based detection yielded the best results on misclassification detection while vacuity-based detection was the best for OOD detection.
arXiv Detail & Related papers (2020-10-24T04:56:46Z)
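The vacuity and dissonance measures mentioned in the graph-data entry above are derived from a per-node Dirichlet Dir(alpha) via subjective logic; a small sketch of the two quantities follows. The GKDE estimation of alpha itself is not reproduced, and the evidence-to-belief mapping b_k = (alpha_k - 1) / S is the usual convention, assumed here.

```python
import torch

def vacuity(alpha):
    """Vacuity u = K / S with S = sum_k alpha_k; large when total evidence is small."""
    return alpha.shape[-1] / alpha.sum(dim=-1)

def dissonance(alpha):
    """Dissonance: conflict between comparable amounts of evidence spread over
    several classes (typically high for hard in-distribution confusions)."""
    S = alpha.sum(dim=-1, keepdim=True)
    b = (alpha - 1.0) / S                      # belief masses from evidence e_k = alpha_k - 1
    eps = 1e-12
    diss = torch.zeros_like(alpha[..., 0])
    for k in range(alpha.shape[-1]):
        bk = b[..., k:k + 1]
        other = torch.cat([b[..., :k], b[..., k + 1:]], dim=-1)
        bal = 1.0 - (other - bk).abs() / (other + bk + eps)   # relative mass balance
        diss = diss + bk[..., 0] * (other * bal).sum(-1) / (other.sum(-1) + eps)
    return diss
```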
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
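As a concrete reading of the one-vs-all formulation discussed above (the specific architectures and losses studied in that paper are not reproduced), a classification head can replace the softmax with per-class sigmoid outputs trained with binary cross-entropy:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneVsAllHead(nn.Module):
    """K independent 'this class vs. rest' logits in place of a softmax head."""
    def __init__(self, in_features, n_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, n_classes)

    def forward(self, features):
        return self.fc(features)               # raw per-class logits

def ova_loss(logits, targets):
    """Binary cross-entropy of every class against the rest."""
    onehot = F.one_hot(targets, num_classes=logits.shape[-1]).float()
    return F.binary_cross_entropy_with_logits(logits, onehot)

# At test time, a confidence score can be read off as the largest sigmoid output:
# confidence = torch.sigmoid(logits).max(dim=-1).values
```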
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
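A model that is Bayesian "just a bit" in the sense of the last entry can be as small as a Laplace approximation over the final linear layer; the sketch below assumes the MAP weights, a diagonal posterior variance, and the penultimate-layer features are already available, and simply Monte-Carlo averages the softmax over sampled last-layer weights.

```python
import torch

@torch.no_grad()
def last_layer_laplace_predict(features, w_map, b_map, w_var, n_samples=30):
    """Predictive distribution under a diagonal-Gaussian Laplace posterior
    N(w_map, w_var) over the last linear layer only.
    features: (N, D), w_map: (C, D), b_map: (C,), w_var: (C, D) variances."""
    probs = torch.zeros(features.shape[0], w_map.shape[0], device=features.device)
    for _ in range(n_samples):
        w = w_map + w_var.sqrt() * torch.randn_like(w_map)   # sample last-layer weights
        probs += torch.softmax(features @ w.T + b_map, dim=-1)
    return probs / n_samples   # averaged softmax, typically less overconfident than MAP
```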
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.