CUQ-GNN: Committee-based Graph Uncertainty Quantification using Posterior Networks
- URL: http://arxiv.org/abs/2409.04159v1
- Date: Fri, 6 Sep 2024 09:43:09 GMT
- Title: CUQ-GNN: Committee-based Graph Uncertainty Quantification using Posterior Networks
- Authors: Clemens Damke, Eyke Hüllermeier
- Abstract summary: We study the influence of domain-specific characteristics when defining a meaningful notion of predictive uncertainty on graph data.
We propose a family of Committee-based Uncertainty Quantification Graph Neural Networks (CUQ-GNNs).
- Score: 21.602569813024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the influence of domain-specific characteristics when defining a meaningful notion of predictive uncertainty on graph data. Previously, the so-called Graph Posterior Network (GPN) model has been proposed to quantify uncertainty in node classification tasks. Given a graph, it uses Normalizing Flows (NFs) to estimate class densities for each node independently and converts those densities into Dirichlet pseudo-counts, which are then dispersed through the graph using the personalized PageRank algorithm. The architecture of GPNs is motivated by a set of three axioms on the properties of its uncertainty estimates. We show that those axioms are not always satisfied in practice and therefore propose the family of Committee-based Uncertainty Quantification Graph Neural Networks (CUQ-GNNs), which combine standard Graph Neural Networks with the NF-based uncertainty estimation of Posterior Networks (PostNets). This approach adapts more flexibly to domain-specific demands on the properties of uncertainty estimates. We compare CUQ-GNN against GPN and other uncertainty quantification approaches on common node classification benchmarks and show that it is effective at producing useful uncertainty estimates.
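The pseudo-count dispersion step described in the abstract can be sketched as follows. This is a minimal illustration, assuming an APPNP-style power-iteration approximation of personalized PageRank with symmetric adjacency normalization; the function name `ppr_diffusion` and parameter names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def ppr_diffusion(alpha_counts, adj, teleport=0.1, num_iter=50):
    """Disperse per-node Dirichlet pseudo-counts over a graph via an
    approximate personalized PageRank (power iteration, APPNP-style).

    alpha_counts: (n_nodes, n_classes) non-negative pseudo-counts
    adj:          (n_nodes, n_nodes) unweighted adjacency matrix
    teleport:     restart probability; larger values keep evidence more local
    """
    # Symmetrically normalize the adjacency matrix: D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    h = alpha_counts.astype(float).copy()
    for _ in range(num_iter):
        # Each step mixes neighbor evidence with the node's own local counts.
        h = (1 - teleport) * (norm_adj @ h) + teleport * alpha_counts
    return h
```

After diffusion, each row can be read as smoothed Dirichlet concentration parameters: nodes surrounded by high-evidence neighbors of one class accumulate pseudo-counts for that class, while nodes far from any evidence keep near-uniform (high-uncertainty) counts.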
Related papers
- Conditional Uncertainty Quantification for Tensorized Topological Neural Networks [19.560300212956747]
Graph Neural Networks (GNNs) have become the de facto standard for analyzing graph-structured data.
Recent studies have raised concerns about the statistical reliability of uncertainty estimates produced by GNNs.
This paper introduces a novel technique for quantifying uncertainty in non-exchangeable graph-structured data.
arXiv Detail & Related papers (2024-10-20T01:03:40Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- When Do We Need Graph Neural Networks for Node Classification? [38.68793097833027]
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs)
In some cases, GNNs have little performance gain or even underperform graph-agnostic NNs.
arXiv Detail & Related papers (2022-10-30T23:10:23Z)
- A Graph Is More Than Its Nodes: Towards Structured Uncertainty-Aware Learning on Graphs [49.76175970328538]
We propose novel edgewise metrics, namely the edgewise expected calibration error (ECE) and the agree/disagree ECEs, which provide criteria for uncertainty estimation on graphs beyond the nodewise setting.
Our experiments demonstrate that the proposed edgewise metrics can complement the nodewise results and yield additional insights.
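The edgewise calibration metrics mentioned above could be sketched as below. The exact edgewise definition used by that paper is not given here, so this is a hypothetical reading: each edge contributes one calibration sample whose confidence is the mean endpoint confidence and whose accuracy indicator is 1 iff both endpoint predictions are correct. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def ece(conf, correct, n_bins=10):
    """Standard expected calibration error over equal-width confidence bins."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # Weight each bin by its share of samples, as in standard ECE.
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err

def edgewise_ece(probs, labels, edges, n_bins=10):
    """Hypothetical edgewise ECE: one sample per edge (u, v), with mean
    endpoint confidence and a both-endpoints-correct indicator."""
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    u, v = edges[:, 0], edges[:, 1]
    edge_conf = (conf[u] + conf[v]) / 2
    edge_correct = ((pred[u] == labels[u]) & (pred[v] == labels[v])).astype(float)
    return ece(edge_conf, edge_correct, n_bins)
```

Splitting the edge set into same-label and different-label pairs would give the agree/disagree variants mentioned in the summary.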
arXiv Detail & Related papers (2022-10-27T16:12:58Z)
- Stability of Aggregation Graph Neural Networks [153.70485149740608]
We study the stability properties of aggregation graph neural networks (Agg-GNNs) considering perturbations of the underlying graph.
We prove that the stability bounds are defined by the properties of the filters in the first layer of the CNN that acts on each node.
We also conclude that in Agg-GNNs the selectivity of the mapping operators is tied to the properties of the filters only in the first layer of the CNN stage.
arXiv Detail & Related papers (2022-07-08T03:54:52Z)
- Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification [37.86338466089894]
Uncertainty estimation for non-independent node-level predictions is under-explored.
We propose a new model Graph Posterior Network (GPN) which explicitly performs Bayesian posterior updates for predictions on nodes.
GPN outperforms existing approaches for uncertainty estimation in the experiments.
arXiv Detail & Related papers (2021-10-26T20:41:20Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task unless the topological randomness is accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.