On the Equivalence of Graph Convolution and Mixup
- URL: http://arxiv.org/abs/2310.00183v1
- Date: Fri, 29 Sep 2023 23:09:54 GMT
- Title: On the Equivalence of Graph Convolution and Mixup
- Authors: Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu,
Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian
Khabsa, Qifan Wang, Xia Hu
- Abstract summary: This paper investigates the relationship between graph convolution and Mixup techniques.
Under two mild conditions, graph convolution can be viewed as a specialized form of Mixup.
We establish this equivalence mathematically by demonstrating that graph convolution networks (GCN) and simplified graph convolution (SGC) can be expressed as a form of Mixup.
- Score: 71.8932383179048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the relationship between graph convolution and Mixup
techniques. Graph convolution in a graph neural network involves aggregating
features from neighboring samples to learn representative features for a
specific node or sample. On the other hand, Mixup is a data augmentation
technique that generates new examples by averaging features and one-hot labels
from multiple samples. One commonality between these techniques is their
utilization of information from multiple samples to derive feature
representation. This study aims to explore whether a connection exists between
these two approaches. Our investigation reveals that, under two mild
conditions, graph convolution can be viewed as a specialized form of Mixup that
is applied during both the training and testing phases. The two conditions are:
1) \textit{Homophily Relabel} - assigning the target node's label to all its
neighbors, and 2) \textit{Test-Time Mixup} - applying Mixup to the features at
test time. We establish this equivalence mathematically by demonstrating that graph
convolution networks (GCN) and simplified graph convolution (SGC) can be
expressed as a form of Mixup. We also verify the equivalence empirically by
training an MLP under the two conditions and achieving comparable performance.
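The core claim is easy to check numerically. Below is a minimal numpy sketch (not the authors' code; the toy graph, features, and labels are assumptions) showing that one row-normalized graph-convolution step is, node by node, a Mixup of neighbor features, and that Homophily Relabel makes the corresponding Mixup label collapse to the target node's own label.

```python
# A minimal numpy sketch (not the authors' code): the toy graph,
# features, and labels below are assumptions for illustration.
import numpy as np

# Toy graph: a 4-node path (0-1, 1-2, 2-3) with self-loops, so each
# node aggregates its own feature together with its neighbors'.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = np.random.randn(4, 8)       # node features
Y = np.eye(2)[[0, 0, 1, 1]]     # one-hot labels

# SGC-style graph convolution with row normalization: H = A_hat @ X.
A_hat = A / A.sum(axis=1, keepdims=True)
H = A_hat @ X

# Node i's convolved feature is exactly a Mixup of its neighborhood:
# weights lambda_j = A_hat[i, j] are nonnegative and sum to 1.
i = 1
nbrs = np.nonzero(A[i])[0]
lam = A_hat[i, nbrs]
x_mix = (lam[:, None] * X[nbrs]).sum(axis=0)
assert np.allclose(H[i], x_mix)

# Condition 1, Homophily Relabel: every neighbor is assigned node i's
# label, so the Mixup label (weighted average of one-hots) equals Y[i].
y_mix = lam @ np.repeat(Y[i][None, :], len(nbrs), axis=0)
assert np.allclose(y_mix, Y[i])

# Condition 2, Test-Time Mixup: the same feature mixing (A_hat @ X) is
# applied at inference time, not only during training.
```

With both conditions in place, the graph enters only through the mixing weights, which is why an MLP trained on such mixed pairs can match GCN/SGC, as the paper verifies empirically.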
Related papers
- GDM: Dual Mixup for Graph Classification with Limited Supervision [27.8982897698616]
Graph Neural Networks (GNNs) require a large number of labeled graph samples to obtain good performance on the graph classification task.
The performance of GNNs degrades significantly as the number of labeled graph samples decreases.
We propose a novel mixup-based graph augmentation method to generate new labeled graph samples.
arXiv Detail & Related papers (2023-09-18T20:17:10Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to selection bias in the training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Graph Mixup with Soft Alignments [49.61520432554505]
We study graph data augmentation by mixup, which has been used successfully on images.
We propose S-Mixup, a simple yet effective mixup method for graph classification by soft alignments.
arXiv Detail & Related papers (2023-06-11T22:04:28Z)
- Beyond Homophily: Reconstructing Structure for Graph-agnostic Clustering [15.764819403555512]
A graph cannot be identified as homophilic or heterophilic before a suitable GNN model has been found.
We propose a novel graph clustering method with three key components: graph reconstruction, a mixed filter, and a dual graph clustering network.
Our method outperforms existing methods on heterophilic graphs.
arXiv Detail & Related papers (2023-05-03T01:49:01Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Effects of Graph Convolutions in Deep Networks [8.937905773981702]
We present a rigorous theoretical understanding of the effects of graph convolutions in multi-layer networks.
We show that a single graph convolution expands the regime of distances between the class means in which multi-layer networks can classify the data.
We provide both theoretical and empirical insights into the performance of graph convolutions placed in different combinations among the layers of a network; a numerical illustration of the effect follows this entry.
arXiv Detail & Related papers (2022-04-20T08:24:43Z)
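As a small numerical illustration of the claim above (not the paper's analysis; the stochastic block model and Gaussian parameters below are assumptions), one graph convolution shrinks within-class spread much faster than it pulls the class means together, so the classes separate at a smaller mean distance:

```python
# A toy numerical illustration (parameters are assumptions, not from the
# paper): one graph convolution sharpens two overlapping Gaussian classes.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 2.0, 4.0   # nodes per class, mean distance, noise std

# 1-D Gaussian features for two classes with means -d/2 and +d/2.
X = np.concatenate([rng.normal(-d / 2, sigma, n), rng.normal(d / 2, sigma, n)])
labels = np.repeat([0, 1], n)

# Stochastic block model: dense within classes, sparse across classes.
P = np.where(labels[:, None] == labels[None, :], 0.10, 0.01)
upper = np.triu(rng.random((2 * n, 2 * n)) < P, k=1)
A = (upper + upper.T).astype(float)
np.fill_diagonal(A, 1.0)

# One row-normalized graph convolution.
H = (A / A.sum(axis=1, keepdims=True)) @ X

# Averaging over mostly same-class neighbors shrinks within-class std by
# roughly 1/sqrt(degree) while the class means barely move, so the same
# mean distance becomes far easier to classify.
for name, Z in [("raw", X), ("after one conv", H)]:
    print(f"{name}: means {Z[:n].mean():+.2f}/{Z[n:].mean():+.2f}, "
          f"stds {Z[:n].std():.2f}/{Z[n:].std():.2f}")
```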
- G-Mixup: Graph Data Augmentation for Graph Classification [55.63157775049443]
Mixup has shown superiority in improving the generalization and robustness of neural networks by interpolating features and labels between two random samples.
We propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
Experiments show that $\mathcal{G}$-Mixup substantially improves the generalization and robustness of GNNs; a rough sketch of the graphon idea follows this entry.
arXiv Detail & Related papers (2022-02-15T04:09:44Z)
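A rough sketch of the graphon-interpolation idea (not the paper's implementation: same-size graphs, trivial node alignment, and a plain averaging estimator are simplifying assumptions here):

```python
# A rough sketch of graphon interpolation; the real G-Mixup uses proper
# graphon estimators and node alignment. All parameters are assumptions.
import numpy as np

def random_graph(n, p, rng):
    # Erdos-Renyi adjacency matrix: symmetric, no self-loops.
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper + upper.T).astype(int)

def estimate_graphon(adjs):
    # Crude step-function graphon estimate: average a class's (aligned,
    # same-size) adjacency matrices.
    return np.mean(adjs, axis=0)

def g_mixup(W_a, W_b, lam, rng):
    # Interpolate the two class generators, then sample a new graph.
    W = lam * W_a + (1 - lam) * W_b
    upper = np.triu(rng.random(W.shape) < W, k=1)
    return (upper + upper.T).astype(int)

rng = np.random.default_rng(0)
class_a = [random_graph(10, 0.8, rng) for _ in range(20)]   # dense class
class_b = [random_graph(10, 0.2, rng) for _ in range(20)]   # sparse class
W_a, W_b = estimate_graphon(class_a), estimate_graphon(class_b)
A_new = g_mixup(W_a, W_b, lam=0.6, rng=rng)
# The matching soft label is 0.6 * y_a + 0.4 * y_b, as in standard Mixup.
```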
- Geometric graphs from data to aid classification tasks with graph convolutional networks [0.0]
We show that, even if additional relational information is not available in the data set, one can improve classification by constructing geometric graphs from the features themselves.
The improvement in classification accuracy is maximized by graphs that capture sample similarity with relatively low edge density (see the sketch after this entry).
arXiv Detail & Related papers (2020-05-08T15:00:45Z)
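A minimal sketch of this idea (the kNN construction and parameters here are assumed choices, not necessarily the paper's): build a sparse similarity graph directly from the feature vectors and feed it to a graph convolutional network.

```python
# A minimal sketch: build a low-density geometric (kNN) graph from the
# features themselves. k and the toy features are assumptions.
import numpy as np

def knn_graph(X, k):
    # Connect each sample to its k nearest neighbors in Euclidean
    # distance, producing a sparse similarity graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # no self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k closest per row
    A = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)             # symmetrize

X = np.random.randn(100, 16)   # feature vectors with no given relations
A = knn_graph(X, k=5)          # adjacency to pass to a GCN alongside X
```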
This list is automatically generated from the titles and abstracts of the papers in this site.