Evaluating Robustness and Uncertainty of Graph Models Under Structural
Distributional Shifts
- URL: http://arxiv.org/abs/2302.13875v4
- Date: Wed, 1 Nov 2023 13:33:47 GMT
- Title: Evaluating Robustness and Uncertainty of Graph Models Under Structural
Distributional Shifts
- Authors: Gleb Bazhenov, Denis Kuznedelev, Andrey Malinin, Artem Babenko,
Liudmila Prokhorenkova
- Abstract summary: In node-level problems of graph learning, distributional shifts can be especially complex.
We propose a general approach for inducing diverse distributional shifts based on graph structure.
We show that simple models often outperform more sophisticated methods on the considered structural shifts.
- Score: 43.40315460712298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In reliable decision-making systems based on machine learning, models have to
be robust to distributional shifts or provide the uncertainty of their
predictions. In node-level problems of graph learning, distributional shifts
can be especially complex since the samples are interdependent. To evaluate the
performance of graph models, it is important to test them on diverse and
meaningful distributional shifts. However, most graph benchmarks considering
distributional shifts for node-level problems focus mainly on node features,
while structural properties are also essential for graph problems. In this
work, we propose a general approach for inducing diverse distributional shifts
based on graph structure. We use this approach to create data splits according
to several structural node properties: popularity, locality, and density. In
our experiments, we thoroughly evaluate the proposed distributional shifts and
show that they can be quite challenging for existing graph models. We also
reveal that simple models often outperform more sophisticated methods on the
considered structural shifts. Finally, our experiments provide evidence that
there is a trade-off between the quality of learned representations for the
base classification task under structural distributional shift and the ability
to separate the nodes from different distributions using these representations.
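The abstract describes data splits driven by structural node properties such as popularity. A minimal sketch of one such split, assuming node degree as the popularity measure and a simple fraction-based threshold (illustrative assumptions, not necessarily the paper's exact procedure):

```python
def popularity_split(edges, num_nodes, in_dist_fraction=0.5):
    """Split node ids into an in-distribution (high-degree) set and an
    out-of-distribution (low-degree) set by sorting nodes on degree.

    Degree as the "popularity" property and the threshold are assumptions
    made for illustration.
    """
    degree = [0] * num_nodes
    for u, v in edges:  # undirected edge list
        degree[u] += 1
        degree[v] += 1
    # Popular (high-degree) nodes form the in-distribution part;
    # the long tail of unpopular nodes becomes the OOD part.
    order = sorted(range(num_nodes), key=lambda n: degree[n], reverse=True)
    cut = int(in_dist_fraction * num_nodes)
    return order[:cut], order[cut:]

# Tiny example graph: node 0 is the most "popular" (degree 3).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
in_dist, ood = popularity_split(edges, num_nodes=5)
```

Analogous splits could be built from other structural properties (e.g. local clustering for density), with the same sort-and-threshold pattern.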
Related papers
- DeCaf: A Causal Decoupling Framework for OOD Generalization on Node Classification [14.96980804513399]
Graph Neural Networks (GNNs) are susceptible to distribution shifts, creating vulnerability and security issues in critical domains.
Existing methods that target learning an invariant (feature, structure)-label mapping often depend on oversimplified assumptions about the data generation process.
We introduce a more realistic graph data generation model using Structural Causal Models (SCMs).
We propose a causal decoupling framework, DeCaf, that independently learns unbiased feature-label and structure-label mappings.
arXiv Detail & Related papers (2024-10-27T00:22:18Z)
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Graphs Generalization under Distribution Shifts [11.963958151023732]
We introduce a novel framework, Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shifts in node features and topological structure.
arXiv Detail & Related papers (2024-03-25T00:15:34Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Explaining and Adapting Graph Conditional Shift [28.532526595793364]
Graph Neural Networks (GNNs) have shown remarkable performance on graph-structured data.
Recent empirical studies suggest that GNNs are very susceptible to distribution shift.
arXiv Detail & Related papers (2023-06-05T21:17:48Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first it models the distribution of features associated with the nodes of the given graph, in the second it complements the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that drives the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Graph Mixture Density Networks [24.0362474769709]
We introduce the Graph Mixture Density Network, a new family of machine learning models that can fit multimodal output distributions conditioned on arbitrary input graphs.
We show that there is a significant improvement in the likelihood of an epidemic outcome when taking into account both multimodality and structure.
arXiv Detail & Related papers (2020-12-05T17:39:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.