GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention
- URL: http://arxiv.org/abs/2402.07191v1
- Date: Sun, 11 Feb 2024 12:57:16 GMT
- Title: GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention
- Authors: Fangyu Ding, Haiyang Wang, Zhixuan Chu, Tianming Li, Zhaoping Hu, Junchi Yan
- Abstract summary: Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
- Score: 52.67633391931959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels for different graph learning tasks under various distribution shifts. Many recent GIL methods extract an invariant subgraph from the input graph for prediction, as a regularization strategy to improve the generalization performance of graph learning. Despite their success, such methods have various limitations in how they obtain their invariant subgraphs. In this paper, we provide in-depth analyses of the drawbacks of existing works and propose corresponding principles for invariant subgraph extraction: 1) sparsity, to filter out variant features; 2) softness, for a broader solution space; and 3) differentiability, for sound end-to-end optimization. To meet these principles in one shot, we leverage Optimal Transport (OT) theory and propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA), which serves as a powerful regularization method for GIL tasks. With GSINA, we obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness. Moreover, GSINA is a general graph learning framework that can handle GIL tasks at multiple levels of data granularity. Extensive experiments on both synthetic and real-world datasets validate the superiority of GSINA, which outperforms state-of-the-art GIL methods by large margins on both graph-level and node-level tasks. Our code is publicly available at https://github.com/dingfangyu/GSINA.
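The abstract names the ingredients (OT theory, Sinkhorn normalization, controllable sparsity and softness) without spelling out the operator. Below is a minimal NumPy sketch of one natural reading: soft top-r edge selection cast as a two-bin entropy-regularized OT problem solved by Sinkhorn iterations. This is an illustrative sketch, not the authors' exact GSINA formulation; the function names, the r (sparsity ratio) and tau (temperature) parameters, and the two-bin construction are assumptions.

import numpy as np

def sinkhorn(C, a, b, tau=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn-Knopp scaling.

    C: (n, m) cost matrix; a: (n,) source marginal; b: (m,) target marginal.
    Returns a transport plan P >= 0 with row sums ~a and column sums ~b.
    Smaller tau -> a harder (near-binary) plan; larger tau -> a softer plan.
    """
    K = np.exp(-C / tau)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):             # alternate row/column rescaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # P = diag(u) K diag(v)

def soft_top_r_edge_mask(edge_scores, r, tau=0.1):
    """Differentiable soft top-r edge selection as a tiny OT problem:
    transport uniform mass over edges into two bins, 'keep' (mass r) and
    'drop' (mass 1 - r). The rescaled 'keep' column is a soft mask in
    [0, 1] whose mean equals the sparsity ratio r."""
    n = len(edge_scores)
    # keeping a high-score edge is cheap; dropping it is expensive
    C = np.stack([-edge_scores, edge_scores], axis=1)
    a = np.full(n, 1.0 / n)
    b = np.array([r, 1.0 - r])
    P = sinkhorn(C, a, b, tau=tau)
    return P[:, 0] * n                   # rescale each row's mass 1/n -> 1

# toy usage: keep roughly the top 40% of 5 edges
scores = np.array([2.0, -1.0, 0.5, 3.0, -2.0])
print(soft_top_r_edge_mask(scores, r=0.4, tau=0.1))  # ~[1, 0, 0, 1, 0]
print(soft_top_r_edge_mask(scores, r=0.4, tau=1.0))  # softer, less binary

In this reading, r plays the role of the controllable sparsity and tau the softness the abstract refers to; since the mask is produced by a fixed number of smooth scaling steps, gradients flow through it, matching the differentiability principle.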
Related papers
- GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation [13.317620250521124]
Source-free domain adaptation is a crucial machine learning topic with numerous real-world applications.
Recent graph neural network (GNN) approaches can suffer from serious performance decline due to domain shift and label scarcity.
We propose a novel method named Graph Diffusion-based Alignment with Jigsaw (GALA), tailored for source-free graph domain adaptation.
arXiv Detail & Related papers (2024-10-22T01:32:46Z) - OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
Graph Neural Networks (GNNs) have emerged as promising techniques for encoding structural information.
A key challenge remains: generalizing to unseen graph data with different properties.
We propose a novel graph foundation model, called OpenGraph, to address this challenge.
arXiv Detail & Related papers (2024-03-02T08:05:03Z) - MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Correspondence learning aims to find correct matches in an initial correspondence set that has an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM, as in the sketch below.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Mind the Label Shift of Augmentation-based Graph OOD Generalization [88.32356432272356]
- Mind the Label Shift of Augmentation-based Graph OOD Generalization [88.32356432272356]
LiSA exploits Label-invariant Subgraphs of the training graphs to construct Augmented environments.
LiSA generates diverse augmented environments with a consistent predictive relationship.
Experiments on node-level and graph-level OOD benchmarks show that LiSA achieves impressive generalization performance with different GNN backbones.
arXiv Detail & Related papers (2023-03-27T00:08:45Z) - Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in a latent space learned from a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows practical value in reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
arXiv Detail & Related papers (2021-12-02T03:18:18Z) - Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce G-Zoom, a novel self-supervised graph representation learning algorithm based on Graph Contrastive Adjusted Zooming.
This zooming mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z) - Topology-aware Tensor Decomposition for Meta-graph Learning [33.70569156426479]
A common approach for extracting useful information from heterogeneous graphs is to use meta-graphs.
We propose a new tensor-based viewpoint on learning meta-graphs.
We also propose a topology-aware tensor decomposition, called TENSUS, that reflects the structure of DAGs.
arXiv Detail & Related papers (2021-01-04T16:38:00Z)