Evaluating Robustness and Uncertainty of Graph Models Under Structural
Distributional Shifts
- URL: http://arxiv.org/abs/2302.13875v4
- Date: Wed, 1 Nov 2023 13:33:47 GMT
- Title: Evaluating Robustness and Uncertainty of Graph Models Under Structural
Distributional Shifts
- Authors: Gleb Bazhenov, Denis Kuznedelev, Andrey Malinin, Artem Babenko,
Liudmila Prokhorenkova
- Abstract summary: In node-level problems of graph learning, distributional shifts can be especially complex.
We propose a general approach for inducing diverse distributional shifts based on graph structure.
We show that simple models often outperform more sophisticated methods on the considered structural shifts.
- Score: 43.40315460712298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In reliable decision-making systems based on machine learning, models have to
be robust to distributional shifts or provide the uncertainty of their
predictions. In node-level problems of graph learning, distributional shifts
can be especially complex since the samples are interdependent. To evaluate the
performance of graph models, it is important to test them on diverse and
meaningful distributional shifts. However, most graph benchmarks considering
distributional shifts for node-level problems focus mainly on node features,
while structural properties are also essential for graph problems. In this
work, we propose a general approach for inducing diverse distributional shifts
based on graph structure. We use this approach to create data splits according
to several structural node properties: popularity, locality, and density. In
our experiments, we thoroughly evaluate the proposed distributional shifts and
show that they can be quite challenging for existing graph models. We also
reveal that simple models often outperform more sophisticated methods on the
considered structural shifts. Finally, our experiments provide evidence that
there is a trade-off between the quality of learned representations for the
base classification task under structural distributional shift and the ability
to separate the nodes from different distributions using these representations.
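The split construction described above can be illustrated with a minimal sketch. This is not the authors' code: the function name and the use of node degree as a "popularity" proxy are assumptions; the paper also considers locality- and density-based properties.

```python
# Hypothetical sketch of a structure-based distributional shift:
# rank nodes by a structural property (here, degree as a proxy for
# "popularity") and hold out the least popular nodes as the
# out-of-distribution (OOD) split.

def degree_based_split(edges, ood_fraction=0.3):
    """Split nodes into in-distribution (ID) and OOD sets.

    The lowest-degree nodes form the OOD part, mimicking a
    popularity shift between training and test distributions.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Rank nodes from most to least popular; ties keep insertion order.
    ranked = sorted(degree, key=degree.get, reverse=True)
    cut = int(len(ranked) * (1 - ood_fraction))
    return ranked[:cut], ranked[cut:]  # (ID nodes, OOD nodes)

# Toy graph: node 0 is a hub, node 5 is peripheral (degree 1).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
id_nodes, ood_nodes = degree_based_split(edges, ood_fraction=0.34)
# Node 0 (the hub) lands in the ID split; node 5 lands in the OOD split.
```

A model trained only on the ID nodes and evaluated on the OOD nodes then faces exactly the kind of structural shift the paper studies.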
Related papers
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph [24.44321658238713]
Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions.
We introduce the Topology-Aware Dynamic Reweighting (TAR) framework, which dynamically adjusts sample weights through gradient flow in the Wasserstein space during training.
The framework's effectiveness is demonstrated on four graph OOD datasets and three class-imbalanced node classification datasets.
arXiv Detail & Related papers (2024-06-03T07:32:05Z)
- Graphs Generalization under Distribution Shifts [11.963958151023732]
We introduce a novel framework, Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shifts in node features and topological structure.
arXiv Detail & Related papers (2024-03-25T00:15:34Z) - Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - Explaining and Adapting Graph Conditional Shift [28.532526595793364]
Graph Neural Networks (GNNs) have shown remarkable performance on graph-structured data.
Recent empirical studies suggest that GNNs are very susceptible to distribution shift.
arXiv Detail & Related papers (2023-06-05T21:17:48Z) - GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it generates the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z) - Handling Distribution Shifts on Graphs: An Invariance Perspective [77.14319095965058]
We formulate the OOD problem for node-level prediction on graphs.
We develop a new domain-invariant learning approach, named Explore-to-Extrapolate Risk Minimization.
We theoretically show that our method guarantees a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Graph Mixture Density Networks [24.0362474769709]
We introduce the Graph Mixture Density Network, a new family of machine learning models that can fit multimodal output distributions conditioned on arbitrary input graphs.
We show that there is a significant improvement in the likelihood of an epidemic outcome when taking into account both multimodality and structure.
arXiv Detail & Related papers (2020-12-05T17:39:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.