Individual and Structural Graph Information Bottlenecks for
Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2306.15902v1
- Date: Wed, 28 Jun 2023 03:52:41 GMT
- Title: Individual and Structural Graph Information Bottlenecks for
Out-of-Distribution Generalization
- Authors: Ling Yang, Jiayi Zheng, Heyuan Wang, Zhongyi Liu, Zhilin Huang, Shenda
Hong, Wentao Zhang, Bin Cui
- Abstract summary: We propose Individual Graph Information Bottleneck (I-GIB) and Structural Graph Information Bottleneck (S-GIB).
I-GIB discards irrelevant information by minimizing the mutual information between the input graph and its embeddings.
S-GIB simultaneously discards spurious features and learns invariant features from a high-order perspective.
- Score: 21.227825123510293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) graph generalization is critical for many
real-world applications. Existing methods neglect to discard spurious or noisy
input features that are irrelevant to the label. Moreover, they mainly
conduct instance-level class-invariant graph learning and fail to utilize the
structural class relationships between graph instances. In this work, we
endeavor to address these issues in a unified framework, dubbed Individual and
Structural Graph Information Bottlenecks (IS-GIB). To remove spurious,
class-irrelevant features caused by distribution shifts, we propose Individual Graph Information
Bottleneck (I-GIB) which discards irrelevant information by minimizing the
mutual information between the input graph and its embeddings. To leverage the
structural intra- and inter-domain correlations, we propose Structural Graph
Information Bottleneck (S-GIB). Specifically for a batch of graphs with
multiple domains, S-GIB first computes the pair-wise input-input,
embedding-embedding, and label-label correlations. Then it minimizes the mutual
information between input graph and embedding pairs while maximizing the mutual
information between embedding and label pairs. The critical insight of S-GIB is
to simultaneously discard spurious features and learn invariant features from a
high-order perspective by maintaining class relationships under multiple
distributional shifts. Notably, we unify the proposed I-GIB and S-GIB to form
our complementary framework IS-GIB. Extensive experiments conducted on both
node- and graph-level tasks consistently demonstrate the superior
generalization ability of IS-GIB. The code is available at
https://github.com/YangLing0818/GraphOOD.
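The abstract specifies the two objectives only in words, so a compact sketch may help make them concrete. The following is a minimal, hypothetical PyTorch rendering reconstructed from the abstract alone: the KL-based upper bound for the I-GIB term, the cosine-similarity proxies for S-GIB's pairwise correlations, and all names (i_gib_term, s_gib_term, g_feat as a pooled summary of raw input features) and weights (beta, gamma) are assumptions, not the authors' code; their actual implementation lives in the repository linked above.

```python
# Hypothetical IS-GIB-style loss, sketched from the abstract only.
# Estimators, names, and weights are assumptions; see the authors'
# repository (https://github.com/YangLing0818/GraphOOD) for the real code.
import torch
import torch.nn.functional as F


def i_gib_term(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # I-GIB (sketch): upper-bound I(input graph; embedding) via the KL
    # divergence to a standard normal prior, a common variational-IB proxy.
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


def pairwise_sim(x: torch.Tensor) -> torch.Tensor:
    # Cosine-similarity matrix over all pairs in the batch.
    x = F.normalize(x, dim=-1)
    return x @ x.t()


def flat_corr(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Pearson correlation between two flattened similarity matrices.
    a = a.flatten() - a.mean()
    b = b.flatten() - b.mean()
    return (a @ b) / (a.norm() * b.norm() + 1e-8)


def s_gib_term(g_feat, z, y_onehot):
    # S-GIB (sketch): embedding-embedding similarities should mirror
    # label-label similarities (proxy for maximizing pairwise MI with
    # labels) while staying decorrelated from input-input similarities
    # (proxy for minimizing pairwise MI with inputs).
    s_in, s_emb, s_lab = pairwise_sim(g_feat), pairwise_sim(z), pairwise_sim(y_onehot)
    return F.mse_loss(s_emb, s_lab) + flat_corr(s_emb, s_in).abs()


def is_gib_loss(logits, y, mu, logvar, g_feat, beta=0.1, gamma=0.1):
    # Unified objective: task loss + individual bottleneck + structural bottleneck.
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized embedding
    y_onehot = F.one_hot(y, num_classes=logits.size(-1)).float()
    return (F.cross_entropy(logits, y)
            + beta * i_gib_term(mu, logvar)
            + gamma * s_gib_term(g_feat, z, y_onehot))
```

Here mu, logvar, and logits would come from a GNN encoder and classifier head; the point of the sketch is only that the individual term acts per graph while the structural term acts on batch-level pairwise relations, which is what makes the two bottlenecks complementary.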
Related papers
- TGNN: A Joint Semi-supervised Framework for Graph-level Classification [34.300070497510276]
We propose a novel semi-supervised framework called Twin Graph Neural Network (TGNN).
To explore graph structural information from complementary views, our TGNN has a message passing module and a graph kernel module.
We evaluate our TGNN on various public datasets and show that it achieves strong performance.
arXiv Detail & Related papers (2023-04-23T15:42:11Z)
- Feature propagation as self-supervision signals on graphs [0.0]
Regularized Graph Infomax (RGI) is a simple yet effective framework for node-level self-supervised learning.
We show that RGI can achieve state-of-the-art performance despite its simplicity.
arXiv Detail & Related papers (2023-03-15T14:20:06Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Deconfounded Training for Graph Neural Networks [98.06386851685645]
We present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information.
Specifically, we adopt attention modules to disentangle the critical subgraph from the trivial subgraph.
It allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions.
arXiv Detail & Related papers (2021-12-30T15:22:35Z)
- Improving Subgraph Recognition with Variational Graph Information Bottleneck [62.69606854404757]
Subgraph recognition aims at discovering a compressed substructure of a graph that is most informative to the graph property.
This paper introduces a noise injection method to compress the information in the subgraphs, which leads to a novel Variational Graph Information Bottleneck (VGIB) framework.
arXiv Detail & Related papers (2021-12-18T10:51:13Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task (the underlying IB objective is sketched after this list).
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- Graph Information Bottleneck for Subgraph Recognition [103.37499715761784]
We propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning.
Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph.
We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising.
arXiv Detail & Related papers (2020-10-12T09:32:20Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
- Learning Graph Structure With A Finite-State Automaton Layer [31.028101360041227]
We study the problem of learning to derive abstract relations from the intrinsic graph structure.
We show how to learn these relations end-to-end by relaxing the problem into learning finite-state automata policies.
We demonstrate that this layer can find shortcuts in grid-world graphs and reproduce simple static analyses on Python programs.
arXiv Detail & Related papers (2020-07-09T17:01:34Z)
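For context on the two Graph Information Bottleneck entries above: the classical IB objective they inherit is commonly written as below. This is the generic textbook form (Tishby et al.), not the exact notation of either paper.

```latex
% Classical Information Bottleneck: compress X out of the representation Z
% while keeping Z predictive of Y; GIB instantiates X as the input graph
% (structure plus node features). Generic form, not either paper's notation.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y), \qquad \beta > 0
```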