Debiased Graph Contrastive Learning
- URL: http://arxiv.org/abs/2110.02027v1
- Date: Tue, 5 Oct 2021 13:15:59 GMT
- Title: Debiased Graph Contrastive Learning
- Authors: Jun Xia, Lirong Wu, Jintao Chen, Ge Wang, Stan Z. Li
- Abstract summary: We propose a novel and effective method to estimate the probability that each negative sample is a true negative.
Debiased Graph Contrastive Learning (DGCL) outperforms or matches previous unsupervised state-of-the-art results on several benchmarks.
- Score: 27.560217866753938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) has emerged as a dominant technique for
unsupervised representation learning which embeds augmented versions of the
anchor close to each other (positive samples) and pushes the embeddings of
other samples (negative samples) apart. As revealed in recent works, CL can
benefit from hard negative samples (negative samples that are difficult to
distinguish from the anchor). However, we observe only minor improvements or even
performance drops when we adopt existing hard negative mining techniques in
Graph Contrastive Learning (GCL). We find that in GCL, many hard negative samples
that are similar to the anchor are in fact false negatives (samples from the same
class as the anchor), unlike CL in computer vision; this explains the
unsatisfactory performance of existing hard negative mining techniques in GCL.
To eliminate this bias, we propose Debiased Graph Contrastive Learning
(DGCL), a novel and effective method to estimate the probability that each
negative sample is a true negative. With this probability, we devise two schemes
(i.e., DGCL-weight and DGCL-mix) to boost the performance of GCL. Empirically,
DGCL outperforms or matches previous unsupervised state-of-the-art results on
several benchmarks and even exceeds the performance of supervised ones.
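To make the weighting idea concrete, the following is a minimal sketch of a DGCL-weight-style loss: a standard two-view InfoNCE term in which each cross-view negative is re-weighted by an estimated probability of being a true negative. The function name, the placeholder `p_true` estimates, and the hyperparameters are illustrative assumptions rather than the authors' implementation, and the second scheme, DGCL-mix, is not sketched here.

```python
# Hedged sketch, not the released DGCL code: weighted two-view InfoNCE where
# negatives are scaled by an estimated probability of being true negatives.
import torch
import torch.nn.functional as F

def weighted_infonce(z1, z2, p_true, temperature=0.5):
    """z1, z2: [N, d] embeddings of two augmented views (row i of each view
    comes from the same node/graph); p_true: [N, N] estimated probability
    that sample j is a *true* negative for anchor i."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / temperature)    # [N, N] cross-view similarities
    pos = sim.diag()                              # anchor vs. its positive
    off_diag = ~torch.eye(len(z1), dtype=torch.bool, device=z1.device)
    neg = (p_true * sim * off_diag).sum(dim=1)    # down-weight likely false negatives
    return -torch.log(pos / (pos + neg)).mean()

# Usage with random placeholders for the embeddings and the probability estimates:
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
p_true = torch.full((8, 8), 0.9)
loss = weighted_infonce(z1, z2, p_true)
```

When `p_true` is identically 1 this reduces to the ordinary InfoNCE loss, so the debiasing only changes how much each negative contributes to the denominator.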
Related papers
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance.
We propose patch-based self-distillation to transfer knowledge from head to tail classes and relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z) - Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct the bias in the contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
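For intuition only, a well-known positive-unlabeled style correction of the negative term is the debiased contrastive estimator of Chuang et al. (2020); it is shown here as a generic illustration of how positive samples can correct the negative term, not as PUCL's exact formulation. With encoder $f$, anchor $x$, sampled "negatives" $\{u_i\}_{i=1}^{N}$, sampled positives $\{v_j\}_{j=1}^{M}$, assumed class prior $\tau^+$ (with $\tau^- = 1 - \tau^+$), and temperature $t$:

$$ \tilde g \;=\; \max\!\left\{ \frac{1}{\tau^-}\left( \frac{1}{N}\sum_{i=1}^{N} e^{f(x)^\top f(u_i)} \;-\; \tau^+ \frac{1}{M}\sum_{j=1}^{M} e^{f(x)^\top f(v_j)} \right),\; e^{-1/t} \right\} $$

Here $N\,\tilde g$ replaces the plain sum over negatives in the InfoNCE denominator, and the clamp keeps the corrected estimate positive.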
arXiv Detail & Related papers (2024-01-13T11:18:18Z) - Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method [17.760628718072144]
InfoNCE uses augmentation techniques to obtain two views, where a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are regarded as negative samples.
The goal is to minimize the distance between the anchor node and positive samples and maximize the distance to negative samples.
Due to the lack of label information during training, InfoNCE inevitably treats samples from the same class as negative samples, leading to the issue of false negative samples.
We propose GraphRank, a simple yet efficient graph contrastive learning method that addresses the problem of false negative samples.
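Written out, the two-view objective described above is the standard InfoNCE loss (generic notation, not taken from the GraphRank paper): with $u_i$ the anchor's embedding in one view, $v_i$ the embedding of the corresponding node in the other view, $\mathrm{sim}(\cdot,\cdot)$ a similarity such as cosine, and temperature $\tau$,

$$ \ell(u_i) \;=\; -\log \frac{\exp\!\big(\mathrm{sim}(u_i, v_i)/\tau\big)}{\exp\!\big(\mathrm{sim}(u_i, v_i)/\tau\big) \;+\; \sum_{j \neq i} \exp\!\big(\mathrm{sim}(u_i, v_j)/\tau\big)} $$

Every non-corresponding node $v_j$ enters the denominator as a negative regardless of its (unknown) class, which is exactly where false negatives arise.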
arXiv Detail & Related papers (2023-10-23T03:15:57Z) - Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive Learning [27.728860211993368]
Hard negative mining has been shown to be effective in enhancing self-supervised contrastive learning (CL) on diverse data types.
This article proposes a novel approach that builds a discriminative model on collective affinity information to mine hard negatives in graph data.
Experiments on ten graph datasets show that our approach consistently enhances different state-of-the-art (SOTA) GCL methods in both graph and node classification tasks.
arXiv Detail & Related papers (2023-01-31T00:18:03Z) - Supervised Contrastive Learning with Hard Negative Samples [16.42457033976047]
Contrastive learning (CL) learns a useful representation function by pulling positive samples close to each other while pushing negative samples apart.
In the absence of class information, negative samples are chosen randomly and independently of the anchor.
Supervised CL (SCL) avoids this class collision by conditioning the negative sampling distribution to samples having labels different from that of the anchor.
arXiv Detail & Related papers (2022-08-31T19:20:04Z) - Adversarial Contrastive Learning via Asymmetric InfoNCE [64.42740292752069]
We propose to treat adversarial samples unequally when contrasted, using an asymmetric InfoNCE objective.
In this asymmetric fashion, the adverse impact of the conflicting objectives of CL and adversarial learning can be effectively mitigated.
Experiments show that our approach consistently outperforms existing Adversarial CL methods.
arXiv Detail & Related papers (2022-07-18T04:14:36Z) - Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z) - Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses constrain the clustering results of mini-batch samples at both the sample and class levels.
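The description above can be read as two InfoNCE-style contrasts over a soft-assignment matrix: rows (the class distribution of each sample) for the sample view, and columns (the sample distribution of each class) for the class view. The sketch below is an illustrative reading under that assumption, not the DCDC reference implementation; all names and hyperparameters are hypothetical.

```python
# Illustrative sketch (assumed reading of the sample-view / class-view idea,
# not the DCDC reference code): contrast rows and columns of two [N, K]
# class-probability matrices produced from a batch and its augmentation.
import torch
import torch.nn.functional as F

def view_contrast(a, b, temperature=1.0):
    """InfoNCE over matched rows: row i of `a` is positive with row i of `b`
    and negative with every other row of `b`."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(len(a), device=a.device)
    return F.cross_entropy(logits, targets)

def doubly_contrastive_loss(p1, p2, temperature=1.0):
    """p1, p2: [N, K] class-probability matrices of a mini-batch and its
    augmented version."""
    sample_view = view_contrast(p1, p2, temperature)         # rows: per-sample class distributions
    class_view = view_contrast(p1.t(), p2.t(), temperature)  # columns: per-class sample distributions
    return sample_view + class_view

p1 = torch.softmax(torch.randn(32, 10), dim=1)
p2 = torch.softmax(torch.randn(32, 10), dim=1)
loss = doubly_contrastive_loss(p1, p2)
```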
arXiv Detail & Related papers (2021-03-09T15:15:32Z) - Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses them to define a new adversarial training algorithm for SSL, denoted CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.