ID-MixGCL: Identity Mixup for Graph Contrastive Learning
- URL: http://arxiv.org/abs/2304.10045v2
- Date: Wed, 17 Jan 2024 16:28:51 GMT
- Title: ID-MixGCL: Identity Mixup for Graph Contrastive Learning
- Authors: Gehang Zhang and Bowen Yu and Jiangxia Cao and Xinghua Zhang and
Jiawei Sheng and Chuan Zhou and Tingwen Liu
- Abstract summary: ID-MixGCL allows simultaneous interpolation of input nodes and corresponding identity labels to obtain soft-confidence samples.
Results demonstrate that ID-MixGCL improves performance on graph classification and node classification tasks.
- Score: 22.486101865027678
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph contrastive learning (GCL) has recently achieved substantial
advancements. Existing GCL approaches compare two different "views" of the
same graph in order to learn node/graph representations. The underlying
assumption of these studies is that the graph augmentation strategy is capable
of generating several different graph views such that the graph views are
structurally different but semantically similar to the original graphs, and
thus the ground-truth labels of the original and augmented graph/nodes can be
regarded identical in contrastive learning. However, we observe that this
assumption does not always hold. For instance, the deletion of a super-node
within a social network can exert a substantial influence on the partitioning
of communities for other nodes. Similarly, any perturbation to nodes or edges
in a molecular graph will change the labels of the graph. Therefore, we believe
that augmenting the graph, accompanied by an adaptation of the labels used for
the contrastive loss, will facilitate the encoder to learn a better
representation. Based on this idea, we propose ID-MixGCL, which allows the
simultaneous interpolation of input nodes and corresponding identity labels to
obtain soft-confidence samples, with a controllable degree of change, leading
to the capture of fine-grained representations from self-supervised training on
unlabeled graphs. Experimental results show that ID-MixGCL improves
performance on graph classification and node classification tasks, with
gains of 3-29 absolute percentage points over state-of-the-art techniques on
the Cora, IMDB-B, IMDB-M, and PROTEINS datasets.
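The core idea of the abstract, mixing node representations together with their identity labels so the contrastive loss targets become soft, can be sketched as follows. This is an illustrative NumPy sketch based on standard mixup practice, not the authors' implementation; the function names, the Beta-distributed mixing coefficient, and the soft InfoNCE-style loss are assumptions.

```python
import numpy as np

def identity_mixup(h, alpha=1.0, rng=None):
    """Mix node embeddings h (n x d) with a random permutation of themselves.

    Node i's mixed embedding is lam * h[i] + (1 - lam) * h[perm[i]], so its
    soft identity label puts weight lam on i and (1 - lam) on perm[i].
    """
    rng = rng or np.random.default_rng()
    n = h.shape[0]
    lam = rng.beta(alpha, alpha)          # controllable degree of change
    perm = rng.permutation(n)
    h_mix = lam * h + (1.0 - lam) * h[perm]
    y = np.zeros((n, n))                  # soft identity labels, one row per node
    idx = np.arange(n)
    y[idx, idx] += lam
    y[idx, perm] += 1.0 - lam
    return h_mix, y, lam

def soft_infonce(h_anchor, h_mix, y, tau=0.5):
    """Cross-entropy between row-softmax cosine similarities and soft labels."""
    a = h_anchor / np.linalg.norm(h_anchor, axis=1, keepdims=True)
    b = h_mix / np.linalg.norm(h_mix, axis=1, keepdims=True)
    logits = a @ b.T / tau
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(y * logp).sum(axis=1).mean()
```

Compared with standard GCL, the only change is that the one-hot contrastive target becomes the soft label matrix `y`, so an augmented node is no longer forced to be a "hard" positive of exactly one anchor.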
Related papers
- Graph Mixup with Soft Alignments [49.61520432554505]
We study graph data augmentation by mixup, which has been used successfully on images.
We propose S-Mixup, a simple yet effective mixup method for graph classification by soft alignments.
arXiv Detail & Related papers (2023-06-11T22:04:28Z)
- Deep Graph-Level Clustering Using Pseudo-Label-Guided Mutual Information Maximization Network [31.38584638254226]
We study the problem of partitioning a set of graphs into different groups such that the graphs in the same group are similar while the graphs in different groups are dissimilar.
To solve the problem, we propose a novel method called Deep Graph-Level Clustering (DGLC).
Our DGLC achieves graph-level representation learning and graph-level clustering in an end-to-end manner.
arXiv Detail & Related papers (2023-02-05T12:28:08Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated by encoding features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that makes the node-level message passing process using Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.