Graph Self-Contrast Representation Learning
- URL: http://arxiv.org/abs/2309.02304v1
- Date: Tue, 5 Sep 2023 15:13:48 GMT
- Title: Graph Self-Contrast Representation Learning
- Authors: Minjie Chen, Yao Cheng, Ye Wang, Xiang Li, Ming Gao
- Abstract summary: We propose a novel graph self-contrast framework GraphSC.
It only uses one positive and one negative sample, and chooses triplet loss as the objective.
We conduct extensive experiments to evaluate the performance of GraphSC against 19 other state-of-the-art methods.
- Score: 14.519482062111507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) has recently emerged as a promising approach
for graph representation learning. Some existing methods adopt the 1-vs-K
scheme to construct one positive and K negative samples for each graph, but it
is difficult to set K. Methods that do not use negative samples typically need
additional strategies to avoid model collapse, which only alleviate the problem
to some extent. Both drawbacks hurt the generalizability and efficiency of the
model. In this paper, to address these issues, we propose a novel graph
self-contrast framework GraphSC, which only uses one positive and one negative
sample, and chooses triplet loss as the objective. Specifically, self-contrast
has two implications. First, GraphSC generates both positive and negative views
of a graph sample from the graph itself via graph augmentation functions of
various intensities, and uses them for self-contrast. Second, GraphSC uses the
Hilbert-Schmidt Independence Criterion (HSIC) to factorize the representations
into multiple factors and proposes a masked self-contrast mechanism to better
separate positive and negative samples. Further, since the triplet loss only
optimizes the relative distance between the anchor and its positive/negative
samples, it does not constrain the absolute distance between the anchor and the
positive sample. Therefore, we explicitly reduce the absolute distance between
the anchor and the positive sample to accelerate convergence. Finally, we conduct
extensive experiments to evaluate the performance of GraphSC against 19 other
state-of-the-art methods in both unsupervised and transfer learning settings.
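As a rough illustration of the objective described above (a minimal sketch, not the authors' exact formulation), the following PyTorch-style code combines a triplet margin loss over one positive and one negative view with an extra term that explicitly shrinks the absolute anchor-positive distance; the weight alpha, the margin value, and the use of Euclidean distance are assumptions.

    import torch
    import torch.nn.functional as F

    def self_contrast_loss(anchor, positive, negative, margin=1.0, alpha=0.5):
        # anchor, positive, negative: [batch, dim] graph embeddings, where the
        # positive/negative views come from weak/strong augmentations of the
        # same graph (assumed setup).
        d_pos = F.pairwise_distance(anchor, positive)   # anchor-positive distance
        d_neg = F.pairwise_distance(anchor, negative)   # anchor-negative distance

        # Relative term: keep the positive closer than the negative by a margin.
        triplet = F.relu(d_pos - d_neg + margin).mean()

        # Absolute term: the triplet loss alone only constrains d_pos - d_neg,
        # so explicitly shrinking d_pos is meant to accelerate convergence.
        absolute = d_pos.mean()

        return triplet + alpha * absolute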
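The HSIC-based factorization can be grounded with the standard biased empirical HSIC estimator; how GraphSC applies it is not detailed in the abstract, so the factor splitting below (chunking the embedding and penalizing dependence between chunks) is only an assumed illustration.

    import torch

    def rbf_kernel(x, sigma=1.0):
        # Gaussian (RBF) kernel matrix over the rows of x.
        dist2 = torch.cdist(x, x) ** 2
        return torch.exp(-dist2 / (2 * sigma ** 2))

    def hsic(x, y, sigma=1.0):
        # Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2.
        n = x.shape[0]
        k, l = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
        h = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
        return torch.trace(k @ h @ l @ h) / (n - 1) ** 2

    def factor_independence_penalty(z, num_factors=4):
        # Assumed usage: split an embedding into chunks ("factors") and
        # penalize statistical dependence between every pair of factors.
        chunks = torch.chunk(z, num_factors, dim=-1)
        penalty = z.new_zeros(())
        for i in range(num_factors):
            for j in range(i + 1, num_factors):
                penalty = penalty + hsic(chunks[i], chunks[j])
        return penalty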
Related papers
- Smoothed Graph Contrastive Learning via Seamless Proximity Integration [30.247207861739245]
Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives.
We present a Smoothed Graph Contrastive Learning model (SGCL) that injects proximity information associated with positive/negative pairs in the contrastive loss.
The proposed SGCL adjusts the penalties associated with node pairs in contrastive loss by incorporating three distinct smoothing techniques.
arXiv Detail & Related papers (2024-02-23T11:32:46Z) - Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method [17.760628718072144]
InfoNCE uses augmentation techniques to obtain two views, where a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are regarded as negative samples.
The goal is to minimize the distance between the anchor node and positive samples and maximize the distance to negative samples.
Due to the lack of label information during training, InfoNCE inevitably treats samples from the same class as negative samples, leading to the issue of false negative samples.
We propose GraphRank, a simple yet efficient graph contrastive learning method that addresses the problem of false negative samples.
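For reference, here is a minimal sketch of the InfoNCE setup described above, assuming cosine similarity and a temperature tau (common but assumed choices): each node in one view is the anchor, the same node in the other view is its positive, and all other nodes in the other view act as negatives.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, tau=0.5):
        # z1, z2: [num_nodes, dim] embeddings of the two augmented views.
        z1 = F.normalize(z1, dim=-1)
        z2 = F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / tau                              # anchor-to-all similarities
        targets = torch.arange(z1.shape[0], device=z1.device)   # positive = same index
        # Every off-diagonal pair is treated as a negative, including nodes of
        # the same class -- the false-negative issue noted above.
        return F.cross_entropy(logits, targets)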
arXiv Detail & Related papers (2023-10-23T03:15:57Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and
Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Efficient block contrastive learning via parameter-free meta-node
approximation [1.1470070927586016]
Sub-sampling is not optimal, and incorrect negative sampling leads to sampling bias.
We propose a meta-node based approximation technique that can (a) proxy all negative combinations and (b) do so in time quadratic in the cluster size.
We show promising accuracy gains over state-of-the-art graph clustering on 6 benchmarks.
arXiv Detail & Related papers (2022-09-28T12:56:35Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Generating Counterfactual Hard Negative Samples for Graph Contrastive
Learning [22.200011046576716]
Graph contrastive learning is a powerful tool for unsupervised graph representation learning.
Recent works usually sample negative samples from the same training batch with the positive samples, or from an external irrelevant graph.
We propose a novel method that uses a counterfactual mechanism to generate artificial hard negative samples for contrastive learning.
arXiv Detail & Related papers (2022-07-01T02:19:59Z) - Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype.
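One plausible form of that reweighting step (an assumed illustration, not taken from the paper) is to give each negative a weight proportional to the distance between its cluster centroid and the query's centroid:

    import torch

    def prototype_reweight(query_proto, negative_protos):
        # query_proto: [dim]; negative_protos: [num_neg, dim] cluster centroids.
        # Negatives whose prototype lies farther from the query prototype get
        # larger weights in the contrastive denominator.
        dists = torch.norm(negative_protos - query_proto, dim=-1)
        return dists / dists.sum()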
arXiv Detail & Related papers (2021-06-17T16:45:31Z) - Provable Guarantees for Self-Supervised Deep Learning with Spectral
Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
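That loss has a compact closed form, roughly L(f) = -2*E[f(x)·f(x+)] + E[(f(x)·f(x'))^2]; the mini-batch sketch below is one common way to approximate it and is not taken from the authors' code.

    import torch

    def spectral_contrastive_loss(z1, z2):
        # z1[i], z2[i]: embeddings of two augmentations of the same sample.
        n = z1.shape[0]
        positive = (z1 * z2).sum(dim=-1).mean()           # alignment of positive pairs
        cross = z1 @ z2.t()                               # all pairwise inner products
        off_diag = cross - torch.diag(torch.diag(cross))  # non-matching pairs as negatives
        negative = (off_diag ** 2).sum() / (n * (n - 1))
        return -2 * positive + negative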
arXiv Detail & Related papers (2021-06-08T07:41:02Z) - Contrastive Attraction and Contrastive Repulsion for Representation
Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)