Topology Reorganized Graph Contrastive Learning with Mitigating Semantic Drift
- URL: http://arxiv.org/abs/2407.16726v1
- Date: Tue, 23 Jul 2024 13:55:33 GMT
- Title: Topology Reorganized Graph Contrastive Learning with Mitigating Semantic Drift
- Authors: Jiaqiang Zhang, Songcan Chen
- Abstract summary: Graph contrastive learning (GCL) is an effective paradigm for node representation learning in graphs.
To increase the diversity of the contrastive view, we propose two simple and effective global topological augmentations to complement current GCL.
- Score: 28.83750578838018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL) is an effective paradigm for node representation learning in graphs. The key components behind GCL are data augmentation and positive-negative pair selection. Typical data augmentations in GCL, such as uniform edge deletion, are generally blind local perturbations and are prone to producing views with insufficient diversity. Additionally, such augmentation risks pushing the augmented data into other classes. Moreover, most methods treat all other samples as negatives; such negative pairing naturally introduces sampling bias and may likewise cause the learned representations to suffer from semantic drift. Therefore, to increase the diversity of the contrastive views, we propose two simple and effective global topological augmentations that complement current GCL. One mines the semantic correlation between nodes in the feature space. The other exploits the algebraic properties of the adjacency matrix, characterizing the topology by eigen-decomposition. With the help of both, we can retain important edges and build a better view. To reduce the risk of semantic drift, a prototype-based negative pair selection is further designed to filter out false negative samples. Extensive experiments on various tasks demonstrate the advantages of the model over state-of-the-art methods.
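The abstract outlines three components: a feature-space augmentation that mines semantic correlation between nodes, a spectral augmentation that characterizes topology via eigen-decomposition of the adjacency matrix, and a prototype-based filter for false negatives. Below is a minimal NumPy/scikit-learn sketch of how such components could be realized; the cosine-kNN construction, the eigenvalue weighting, and the k-means prototypes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


def feature_knn_edges(x, k=5):
    """Augmentation 1 (illustrative): mine semantic correlation in feature space
    by linking each node to its k most cosine-similar neighbours."""
    sim = cosine_similarity(x)                  # (n, n) node-feature similarity
    np.fill_diagonal(sim, -np.inf)              # exclude self-loops
    knn = np.argsort(-sim, axis=1)[:, :k]       # top-k neighbour indices per node
    src = np.repeat(np.arange(x.shape[0]), k)
    return np.stack([src, knn.ravel()])         # (2, n*k) candidate edge list


def spectral_edge_scores(adj, dim=16):
    """Augmentation 2 (illustrative): characterize topology via eigen-decomposition
    of the (symmetric) adjacency matrix and score each existing edge by the
    agreement of its endpoints in the leading eigenvector subspace."""
    vals, vecs = np.linalg.eigh(adj)            # eigenvalues in ascending order
    emb = vecs[:, -dim:] * np.abs(vals[-dim:])  # eigenvalue-weighted spectral embedding
    src, dst = np.nonzero(np.triu(adj, k=1))    # existing edges (upper triangle)
    scores = np.einsum("ij,ij->i", emb[src], emb[dst])
    return src, dst, scores                     # retain the top-scoring edges for the view


def false_negative_mask(z, n_prototypes=7):
    """Prototype-based negative selection (illustrative): cluster current embeddings
    into prototypes and forbid pairs sharing a prototype from acting as negatives."""
    labels = KMeans(n_clusters=n_prototypes, n_init=10).fit_predict(z)
    same_prototype = labels[:, None] == labels[None, :]
    return ~same_prototype                      # True where a pair is a valid negative
```

Under these assumptions, the candidate edges and the top-scoring existing edges would be merged into an augmented adjacency for one contrastive view, and the mask would be applied when sampling negative pairs.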
Related papers
- Negative-Free Self-Supervised Gaussian Embedding of Graphs [29.26519601854811]
Graph Contrastive Learning (GCL) has emerged as a promising graph self-supervised learning framework.
We propose a negative-free objective to achieve uniformity, inspired by the fact that points distributed according to a normalized isotropic Gaussian are uniformly spread across the unit hypersphere.
Our proposal achieves competitive performance with fewer parameters, shorter training times, and lower memory consumption compared to existing GCL methods.
arXiv Detail & Related papers (2024-11-02T07:04:40Z)
- Smoothed Graph Contrastive Learning via Seamless Proximity Integration [35.73306919276754]
Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives.
We present a Smoothed Graph Contrastive Learning model (SGCL) that injects proximity information associated with positive/negative pairs in the contrastive loss.
The proposed SGCL adjusts the penalties associated with node pairs in the contrastive loss by incorporating three distinct smoothing techniques.
arXiv Detail & Related papers (2024-02-23T11:32:46Z)
- Pseudo Contrastive Learning for Graph-based Semi-supervised Learning [67.37572762925836]
Pseudo Labeling is a technique used to improve the performance of Graph Neural Networks (GNNs).
We propose a general framework for GNNs, termed Pseudo Contrastive Learning (PCL).
arXiv Detail & Related papers (2023-02-19T10:34:08Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Heterogeneous Graph Contrastive Multi-view Learning [11.489983916543805]
Graph contrastive learning (GCL) has been developed to learn discriminative node representations on graph datasets.
We propose a novel Heterogeneous Graph Contrastive Multi-view Learning (HGCML) model.
HGCML consistently outperforms state-of-the-art baselines on five real-world benchmark datasets.
arXiv Detail & Related papers (2022-10-01T10:53:48Z)
- Enhancing Graph Contrastive Learning with Node Similarity [4.60032347615771]
Graph contrastive learning (GCL) is a representative framework for self-supervised learning.
GCL learns node representations by contrasting anchor nodes with semantically similar nodes (positive samples) and dissimilar nodes (negative samples).
We propose an enhanced objective that contains all positive samples and no false-negative samples.
arXiv Detail & Related papers (2022-08-13T22:49:20Z)
- Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype.
arXiv Detail & Related papers (2021-06-17T16:45:31Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
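The "doubly contrastive" strategy summarized in the last entry above, which compares positives and negatives within their own groups before contrasting the two groups, can be pictured with a short PyTorch sketch; the softmax-based within-group weighting and the final group-level loss are simplifying assumptions for illustration, not the actual objective of that paper.

```python
import torch
import torch.nn.functional as F


def doubly_contrastive_loss(anchor, positives, negatives, tau=0.2):
    """Illustrative group-wise contrast: weight positives and negatives within
    their own groups, then contrast the aggregated group scores."""
    anchor = F.normalize(anchor, dim=-1)   # (d,)
    pos = F.normalize(positives, dim=-1)   # (P, d)
    neg = F.normalize(negatives, dim=-1)   # (N, d)

    pos_sim = pos @ anchor / tau           # anchor-positive similarities, (P,)
    neg_sim = neg @ anchor / tau           # anchor-negative similarities, (N,)

    # Within-group weighting: emphasize hard positives (low similarity to the
    # anchor) and hard negatives (high similarity to the anchor).
    pos_w = torch.softmax(-pos_sim, dim=0)
    neg_w = torch.softmax(neg_sim, dim=0)

    pos_score = (pos_w * pos_sim).sum()    # aggregated positive-group score
    neg_score = (neg_w * neg_sim).sum()    # aggregated negative-group score

    # Group-level contrast: pull the positive group above the negative group.
    return -pos_score + torch.logsumexp(torch.stack([pos_score, neg_score]), dim=0)
```

Called with a (d,) anchor embedding and (P, d)/(N, d) stacks of positive and negative embeddings, it returns a scalar loss.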