From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning
- URL: http://arxiv.org/abs/2406.15044v1
- Date: Fri, 21 Jun 2024 10:47:26 GMT
- Title: From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning
- Authors: Adnan Ali, Jinlong Li, Huanhuan Chen, Ali Kashif Bashir
- Abstract summary: Graph contrastive learning (GCL) contrasts positive-negative counterparts to learn node embeddings.
The variation, quantity, and quality of negative samples compared to positive samples play crucial roles in learning meaningful embeddings for node classification downstream tasks.
This study proposes a novel Cumulative Sample Selection (CSS) algorithm by comprehensively considering negative samples' quality, variations, and quantity.
- Score: 38.87932592059369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL) contrasts positive-negative counterparts to learn node embeddings, with graph data augmentation methods employed to generate these positive-negative samples. The variation, quantity, and quality of negative samples relative to positive samples play crucial roles in learning meaningful embeddings for node classification downstream tasks. Too little variation, excessive quantity, and low-quality negative samples cause the model to overfit to particular nodes, resulting in less robust models. To solve this overfitting problem in the GCL paradigm, this study proposes a novel Cumulative Sample Selection (CSS) algorithm that comprehensively considers negative samples' quality, variation, and quantity. Initially, three negative sample pools are constructed: easy, medium, and hard, containing 25%, 50%, and 25% of the available negative samples, respectively. Then, 10% of the negative samples are selected from each pool for training the model. After that, a decision agent module evaluates the training results and decides whether to explore more negative samples from the three pools by increasing the sampling ratio or to keep exploiting the current ratio. The proposed algorithm is integrated into a graph contrastive learning framework named NegAmplify. NegAmplify is compared with SOTA methods on nine graph node classification datasets and achieves better node classification accuracy on seven of them, with up to a 2.86% improvement.
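For concreteness, here is a minimal Python sketch of the CSS selection loop as described in the abstract. The difficulty score used to order negatives, the improvement signal, and the ratio step size are assumptions; the paper's exact criteria are not given in this summary.

```python
import numpy as np

def build_pools(neg_indices, difficulty):
    """Split negatives into easy/medium/hard pools (25% / 50% / 25% of the
    total), ordered by an assumed difficulty score such as similarity to
    the anchor."""
    order = np.argsort(difficulty)  # easiest first
    n = len(order)
    return {
        "easy": neg_indices[order[: n // 4]],
        "medium": neg_indices[order[n // 4 : 3 * n // 4]],
        "hard": neg_indices[order[3 * n // 4 :]],
    }

def css_select(pools, ratio, rng):
    """Draw `ratio` (initially 0.10) of each pool for one training round."""
    picked = [rng.choice(pool, size=max(1, int(ratio * len(pool))), replace=False)
              for pool in pools.values()]
    return np.concatenate(picked)

def decision_agent(ratio, metric_improved, step=0.05, max_ratio=1.0):
    """Assumed rule: explore (enlarge the sampling ratio) when training
    stalls, otherwise keep exploiting the current ratio."""
    return ratio if metric_improved else min(ratio + step, max_ratio)
```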
Related papers
- Rethinking Samples Selection for Contrastive Learning: Mining of Potential Samples [5.586563813796839]
Contrastive learning trains a model to make the feature representations of two images as close as possible if they belong to the same category and as far apart as possible otherwise.
We take both positive and negative samples into account, mining potential samples from two aspects.
Our method achieves 88.57%, 61.10%, and 36.69% top-1 accuracy on CIFAR10, CIFAR100, and TinyImagenet, respectively.
arXiv Detail & Related papers (2023-11-01T08:08:06Z)
- Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method [17.760628718072144]
InfoNCE uses augmentation techniques to obtain two views, where a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are regarded as negative samples.
The goal is to minimize the distance between the anchor node and positive samples and maximize the distance to negative samples.
Due to the lack of label information during training, InfoNCE inevitably treats samples from the same class as negative samples, leading to the issue of false negative samples.
We propose GraphRank, a simple yet efficient graph contrastive learning method that addresses the problem of false negative samples.
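For reference, a minimal sketch of the InfoNCE objective the summary describes (anchor vs. cross-view positive, other nodes as negatives); this is a generic implementation, not GraphRank's method.

```python
import torch
import torch.nn.functional as F

def infonce(z1, z2, tau=0.5):
    """z1, z2: (nodes, dim) embeddings of two augmented views. z1[i] is the
    anchor, z2[i] its positive; all other rows of z2 act as negatives
    (intra-view negatives are omitted for brevity)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau         # cross-view cosine similarities
    labels = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```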
arXiv Detail & Related papers (2023-10-23T03:15:57Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Synthetic Hard Negative Samples for Contrastive Learning [8.776888865665024]
This paper proposes a novel feature-level method: sampling synthetic hard negative samples for contrastive learning (SSCL).
We generate more and harder negative samples by mixing existing negative samples, and then sample them by controlling the contrast of the anchor sample with the other negative samples.
Our proposed method improves the classification performance on different image datasets and can be readily integrated into existing methods.
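A hedged sketch of the feature-level idea: synthesize harder negatives by convexly mixing existing negative features, then keep those most similar to the anchor. The mixing coefficient and selection rule here are illustrative, not SSCL's exact procedure.

```python
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(anchor, negatives, n_new=32, alpha=0.5):
    """anchor: (dim,) embedding; negatives: (n, dim). Mix random pairs of
    negatives, then keep the synthetic ones most similar to the anchor,
    i.e. the hardest."""
    idx = torch.randint(0, negatives.size(0), (2, n_new))
    mixed = alpha * negatives[idx[0]] + (1 - alpha) * negatives[idx[1]]
    mixed = F.normalize(mixed, dim=1)
    hardness = mixed @ F.normalize(anchor, dim=0)  # cosine similarity to anchor
    return mixed[hardness.topk(n_new // 2).indices]
```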
arXiv Detail & Related papers (2023-04-06T09:54:35Z)
- Entity Aware Negative Sampling with Auxiliary Loss of False Negative Prediction for Knowledge Graph Embedding [0.0]
We propose a novel method called Entity Aware Negative Sampling (EANS).
EANS samples negative entities that resemble the positive one by applying a Gaussian distribution over the aligned entity index space.
The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigate the influence of false negative samples.
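A minimal sketch of the sampling step as summarized above: draw negative entity indices from a Gaussian centred on the positive entity's position in an index space where similar entities are assumed to sit nearby. The alignment itself and the auxiliary false-negative loss are not shown.

```python
import numpy as np

def gaussian_negative_indices(pos_index, num_entities, k, sigma, rng):
    """Draw k negative entity ids from a Gaussian centred on the positive
    entity's position in an (assumed) similarity-aligned index space."""
    offsets = np.rint(rng.normal(0.0, sigma, size=k)).astype(int)
    idx = (pos_index + offsets) % num_entities
    idx[idx == pos_index] = (pos_index + 1) % num_entities  # never return the positive
    return idx
```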
arXiv Detail & Related papers (2022-10-12T14:27:51Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses jointly constrain the clustering results of mini-batch samples at both the sample and class levels.
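A compact sketch of the two losses as described: rows of the batch's class-probability matrix give the sample view and columns give the class view, each contrasted across the two augmentations with an NT-Xent-style loss. The temperature and normalization details are assumptions.

```python
import torch
import torch.nn.functional as F

def view_contrast(a, b, tau=0.5):
    """NT-Xent-style loss: a[i] and b[i] form a positive pair, all other
    rows of b act as negatives."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / tau
    return F.cross_entropy(logits, torch.arange(a.size(0)))

def dcdc_loss(p, p_aug):
    """p, p_aug: (batch, classes) class-probability matrices for original
    and augmented samples. Rows give the sample view, columns the class view."""
    return view_contrast(p, p_aug) + view_contrast(p.t(), p_aug.t())
```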
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
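A generic Metropolis-Hastings negative sampler in the spirit of this summary: walk over candidate node ids, accepting uniform proposals with probability given by the ratio of an unnormalized density q. MCNS's specific self-contrast density is not reproduced here; q is a placeholder.

```python
import numpy as np

def mh_negative_samples(q, num_nodes, k, rng, burn_in=100):
    """Metropolis-Hastings chain over node ids. q(node) is any unnormalized
    density that negative sampling should follow; proposals are uniform."""
    current = int(rng.integers(num_nodes))
    out = []
    for step in range(burn_in + k):
        proposal = int(rng.integers(num_nodes))
        accept = min(1.0, q(proposal) / max(q(current), 1e-12))
        if rng.random() < accept:
            current = proposal
        if step >= burn_in:
            out.append(current)
    return np.array(out)
```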
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)