Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive
Learning
- URL: http://arxiv.org/abs/2301.13340v2
- Date: Sun, 7 Jan 2024 04:23:30 GMT
- Title: Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive
Learning
- Authors: Chaoxi Niu, Guansong Pang, Ling Chen
- Abstract summary: Hard negative mining has proven effective in enhancing self-supervised contrastive learning (CL) on diverse data types.
This article proposes a novel approach that builds a discriminative model on collective affinity information to mine hard negatives in graph data.
Experiments on ten graph datasets show that our approach consistently enhances different state-of-the-art (SOTA) GCL methods in both graph and node classification tasks.
- Score: 27.728860211993368
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Hard negative mining has proven effective in enhancing self-supervised
contrastive learning (CL) on diverse data types, including graph CL (GCL). The
existing hardness-aware CL methods typically treat the negative instances that
are most similar to the anchor instance as hard negatives, which helps improve
CL performance, especially on image data. However, on graph data this approach
often fails to identify the true hard negatives and instead yields many false
negatives. This is mainly because the learned graph representations are not
sufficiently discriminative, owing to oversmoothed representations and/or
non-independent and identically distributed (non-i.i.d.) issues in graph data.
To tackle this problem, this article proposes a novel approach that builds a
discriminative model on collective affinity information (i.e., two sets of
pairwise affinities between the negative instances and the anchor instance) to
mine hard negatives in GCL. In particular, the proposed approach evaluates how
confident/uncertain the discriminative model is about the affinity of each
negative instance to an anchor instance to determine its hardness weight
relative to the anchor instance. This uncertainty information is then
incorporated into the existing GCL loss functions via a weighting term to
enhance their performance. The enhanced GCL is theoretically grounded: the
resulting GCL loss is equivalent to a triplet loss with an adaptive margin
that is exponentially proportional to the learned uncertainty of each negative
instance. Extensive experiments on ten graph datasets show that our approach
does the following: 1) consistently enhances different state-of-the-art (SOTA)
GCL methods in both graph and node classification tasks and 2) significantly
improves their robustness against adversarial attacks. Code is available at
https://github.com/mala-lab/AUGCL.
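The abstract gives enough detail to sketch the overall recipe: score each negative by how uncertain a discriminative model is about its affinity to the anchor, then fold that score into the contrastive loss as a per-negative weight. The following minimal PyTorch sketch illustrates that recipe; the function name, the entropy-based stand-in for the discriminative model (the paper's actual model is trained on two sets of collective affinities), and the exponential weighting form are our assumptions rather than the authors' implementation, which lives in the repository above.

import torch
import torch.nn.functional as F

def uncertainty_weighted_info_nce(anchor, positive, negatives, tau=0.5):
    # InfoNCE with per-negative hardness weights from affinity uncertainty.
    #   anchor:    (d,)   embedding of the anchor instance
    #   positive:  (d,)   embedding of the positive (augmented) view
    #   negatives: (n, d) embeddings of the negative instances

    # Pairwise affinities between the anchor and each negative instance.
    neg_aff = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1)  # (n,)

    # Placeholder for the paper's discriminative model: map each affinity to
    # a pseudo-probability and take its binary entropy as the uncertainty.
    p = torch.sigmoid((neg_aff - neg_aff.mean()) / (neg_aff.std() + 1e-8))
    uncertainty = -(p * p.clamp_min(1e-8).log()
                    + (1 - p) * (1 - p).clamp_min(1e-8).log())  # (n,)

    # Higher uncertainty -> harder negative -> larger weight. The exponential
    # form mirrors the abstract's triplet-loss view, where the adaptive margin
    # is exponentially proportional to the learned uncertainty.
    w = torch.exp(uncertainty)

    pos = torch.exp(F.cosine_similarity(anchor, positive, dim=0) / tau)
    neg = (w * torch.exp(neg_aff / tau)).sum()
    return -torch.log(pos / (pos + neg))

For instance, calling the function with anchor = torch.randn(128), positive = torch.randn(128), and negatives = torch.randn(256, 128) yields a scalar loss in which the negatives the stand-in model is least certain about contribute the most.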
Related papers
- Topology Reorganized Graph Contrastive Learning with Mitigating Semantic Drift [28.83750578838018]
Graph contrastive learning (GCL) is an effective paradigm for node representation learning in graphs.
To increase the diversity of the contrastive view, we propose two simple and effective global topological augmentations to complement current GCL.
arXiv Detail & Related papers (2024-07-23T13:55:33Z)
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In the scenario of long-tailed recognition, where the number of samples in each class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance.
We propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z)
- Smoothed Graph Contrastive Learning via Seamless Proximity Integration [35.73306919276754]
Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives.
We present a Smoothed Graph Contrastive Learning model (SGCL) that injects proximity information associated with positive/negative pairs in the contrastive loss.
The proposed SGCL adjusts the penalties associated with node pairs in the contrastive loss by incorporating three distinct smoothing techniques.
arXiv Detail & Related papers (2024-02-23T11:32:46Z)
- Similarity Preserving Adversarial Graph Contrastive Learning [5.671825576834061]
We propose SP-AGCL, a similarity-preserving adversarial graph contrastive learning framework.
We show that SP-AGCL achieves competitive performance on several downstream tasks.
arXiv Detail & Related papers (2023-06-24T04:02:50Z)
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL).
GSCL facilitates GCL via neighborhood ranking, avoiding the need to specify absolutely similar pairs.
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
- Debiased Graph Contrastive Learning [27.560217866753938]
We propose a novel and effective method to estimate the probability that each negative sample is a true negative.
Debiased Graph Contrastive Learning (DGCL) outperforms or matches previous unsupervised state-of-the-art results on several benchmarks.
arXiv Detail & Related papers (2021-10-05T13:15:59Z)
- Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype; a sketch of this reweighting idea follows at the end of this list.
arXiv Detail & Related papers (2021-06-17T16:45:31Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
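As noted in the Prototypical Graph Contrastive Learning entry above, prototype-based reweighting is concrete enough to sketch. The snippet below is an illustration of that idea under assumed names (prototype_reweighted_negatives, tau), not the PGCL implementation: negatives whose cluster prototypes lie far from the query's prototype are treated as more reliable and receive larger weights.

import torch
import torch.nn.functional as F

def prototype_reweighted_negatives(query_proto, neg_protos, neg_sims, tau=0.5):
    #   query_proto: (d,)   prototype (cluster centroid) of the query's cluster
    #   neg_protos:  (n, d) prototypes of the clusters the negatives belong to
    #   neg_sims:    (n,)   query-negative similarities from the contrastive head

    # Distance between each negative's prototype and the query's prototype.
    dist = torch.norm(neg_protos - query_proto.unsqueeze(0), dim=1)  # (n,)

    # Far-away prototypes suggest true negatives, so they get larger weights;
    # the softmax normalises the weights, rescaled so they average ~1.
    w = F.softmax(dist / tau, dim=0) * dist.numel()
    return w * neg_sims

Here tau controls how sharply negatives from distant clusters dominate; both the value and the softmax normalisation are illustrative choices, not taken from the paper.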