Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications
- URL: http://arxiv.org/abs/2306.15963v2
- Date: Wed, 4 Oct 2023 12:33:08 GMT
- Title: Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications
- Authors: Xinyu Ma, Xu Chu, Yasha Wang, Yang Lin, Junfeng Zhao, Liantao Ma,
Wenwu Zhu
- Abstract summary: We propose a novel graph mixup algorithm called FGWMixup, which seeks a midpoint of source graphs in the Fused Gromov-Wasserstein metric space.
Experiments conducted on five datasets demonstrate that FGWMixup effectively improves the generalizability and robustness of GNNs.
- Score: 44.15102300886821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph data augmentation has shown superiority in enhancing generalizability
and robustness of GNNs in graph-level classifications. However, existing
methods primarily focus on the augmentation in the graph signal space and the
graph structure space independently, neglecting the joint interaction between
them. In this paper, we address this limitation by formulating the problem as
an optimal transport problem that aims to find an optimal inter-graph node
matching strategy considering the interactions between graph structures and
signals. To solve this problem, we propose a novel graph mixup algorithm called
FGWMixup, which seeks a midpoint of source graphs in the Fused
Gromov-Wasserstein (FGW) metric space. To enhance the scalability of our
method, we introduce a relaxed FGW solver that accelerates FGWMixup by
improving the convergence rate from $\mathcal{O}(t^{-1})$ to
$\mathcal{O}(t^{-2})$. Extensive experiments conducted on five datasets using
both classic (MPNNs) and advanced (Graphormers) GNN backbones demonstrate that
FGWMixup effectively improves the generalizability and robustness of GNNs.
Codes are available at https://github.com/ArthurLeoM/FGWMixup.
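The core idea of the abstract — find a soft inter-graph node matching via optimal transport, then interpolate structure and signals jointly — can be illustrated with a deliberately simplified sketch. This is not the authors' FGWMixup solver (which computes a barycenter in the full Fused Gromov-Wasserstein metric space; see their repository above): here the matching is computed only from node-feature distances with a plain entropic Sinkhorn solver, and the function and variable names (`ot_graph_mixup`, `sinkhorn`) are illustrative, not from the paper's codebase.

```python
import numpy as np

def sinkhorn(C, reg=0.1, n_iters=200):
    """Entropic OT plan between uniform marginals for a cost matrix C."""
    n, m = C.shape
    K = np.exp(-C / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m) / m
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    # transport plan: rows sum to 1/n, columns to 1/m
    return u[:, None] * K * v[None, :]

def ot_graph_mixup(A1, X1, A2, X2, lam=0.5, reg=0.1):
    """Simplified OT-based graph mixup: softly match G2's nodes onto G1's
    nodes using feature distances, then interpolate adjacency (structure)
    and node features (signals) together in G1's node space."""
    n1 = A1.shape[0]
    # pairwise squared feature distances as the matching cost
    C = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    T = sinkhorn(C, reg)        # soft matching, shape (n1, n2)
    P = T * n1                  # rescale so each row sums to 1
    A2_proj = P @ A2 @ P.T      # push G2's structure onto G1's nodes
    X2_proj = P @ X2            # push G2's signals onto G1's nodes
    A_mix = lam * A1 + (1 - lam) * A2_proj
    X_mix = lam * X1 + (1 - lam) * X2_proj
    return A_mix, X_mix
```

Mixing a 2-node and a 3-node graph this way yields a 2-node graph whose adjacency and features are convex combinations under the soft matching; `lam=1.0` recovers the first graph exactly. The full FGW formulation additionally makes the matching depend on structural (Gromov-Wasserstein) distortion, not just feature cost.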
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs)
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - Graph Sparsification via Mixture of Graphs [67.40204130771967]
We introduce Mixture-of-Graphs (MoG) to dynamically select tailored pruning solutions for each node.
MoG incorporates multiple sparsifier experts, each characterized by unique sparsity levels and pruning criteria, and selects the appropriate experts for each node.
Experiments on four large-scale OGB datasets and two superpixel datasets, equipped with five GNNs, demonstrate that MoG identifies subgraphs at higher sparsity levels.
arXiv Detail & Related papers (2024-05-23T07:40:21Z) - FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in the paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z) - Graph Mixup with Soft Alignments [49.61520432554505]
We study graph data augmentation by mixup, which has been used successfully on images.
We propose S-Mixup, a simple yet effective mixup method for graph classification by soft alignments.
arXiv Detail & Related papers (2023-06-11T22:04:28Z) - A Generalization of ViT/MLP-Mixer to Graphs [32.86160915431453]
We introduce a new class of GNNs called Graph ViT/MLP-Mixer.
They capture long-range dependency and mitigate the issue of over-squashing.
They offer better speed and memory efficiency with a complexity linear to the number of nodes and edges.
arXiv Detail & Related papers (2022-12-27T03:27:46Z) - FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z) - Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with
Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z) - G-Mixup: Graph Data Augmentation for Graph Classification [55.63157775049443]
Mixup has shown superiority in improving the generalization and robustness of neural networks by interpolating features and labels between two random samples.
We propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
Experiments show that $\mathcal{G}$-Mixup substantially improves the generalization and robustness of GNNs.
arXiv Detail & Related papers (2022-02-15T04:09:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.