Neighborhood-based Hard Negative Mining for Sequential Recommendation
- URL: http://arxiv.org/abs/2306.10047v1
- Date: Mon, 12 Jun 2023 12:28:54 GMT
- Title: Neighborhood-based Hard Negative Mining for Sequential Recommendation
- Authors: Lu Fan, Jiashu Pu, Rongsheng Zhang, Xiao-Ming Wu
- Abstract summary: Negative sampling plays a crucial role in training successful sequential recommendation models.
We propose a Graph-based Negative sampling approach based on Neighborhood Overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining.
- Score: 14.66576013324401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Negative sampling plays a crucial role in training successful sequential
recommendation models. Instead of merely employing random negative sample
selection, numerous strategies have been proposed to mine informative negative
samples to enhance training and performance. However, few of these approaches
utilize structural information. In this work, we observe that as training
progresses, the distributions of node-pair similarities in different groups
with varying degrees of neighborhood overlap change significantly, suggesting
that item pairs in distinct groups may possess different negative
relationships. Motivated by this observation, we propose a Graph-based Negative
sampling approach based on Neighborhood Overlap (GNNO) to exploit structural
information hidden in user behaviors for negative mining. GNNO first constructs
a global weighted item transition graph using training sequences. Subsequently,
it mines hard negative samples based on the degree of overlap with the target
item on the graph. Furthermore, GNNO employs curriculum learning to control the
hardness of negative samples, progressing from easy to difficult. Extensive
experiments on three Amazon benchmarks demonstrate GNNO's effectiveness in
consistently enhancing the performance of various state-of-the-art models and
surpassing existing negative sampling strategies. The code will be released at
https://github.com/floatSDSDS/GNNO.
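To make the recipe concrete, here is a minimal sketch, assuming a Jaccard-style overlap measure and a linear easy-to-hard schedule; all names are illustrative, and the paper's exact edge weighting, overlap definition, and curriculum may differ.

```python
from collections import defaultdict

def build_transition_graph(sequences):
    """Weighted item-transition graph: edge weight counts adjacent co-occurrences."""
    graph = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            graph[a][b] += 1
            graph[b][a] += 1  # treated as undirected for overlap computation
    return graph

def neighborhood_overlap(graph, u, v):
    """Weighted Jaccard overlap between the neighborhoods of items u and v."""
    nu, nv = graph[u], graph[v]
    inter = sum(min(nu[i], nv[i]) for i in set(nu) & set(nv))
    union = sum(nu.values()) + sum(nv.values()) - inter
    return inter / union if union else 0.0

def sample_hard_negatives(graph, target, candidates, k, progress):
    """Curriculum: progress ~ 0 picks low-overlap (easy) negatives,
    progress ~ 1 picks high-overlap (hard) ones."""
    scored = sorted(candidates, key=lambda c: neighborhood_overlap(graph, target, c))
    cut = int(progress * (len(scored) - k))
    return scored[cut:cut + k]

# Toy usage with three training sequences.
g = build_transition_graph([[1, 2, 3, 4], [2, 3, 5], [1, 3, 5, 4]])
print(sample_hard_negatives(g, target=4, candidates=[1, 2, 5], k=1, progress=0.9))
```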
Related papers
- From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning [38.87932592059369]
Graph contrastive learning (GCL) aims to contrast positive-negative counterparts to learn the node embeddings.
The variation, quantity, and quality of negative samples compared to positive samples play crucial roles in learning meaningful embeddings for node classification downstream tasks.
This study proposes a novel Cumulative Sample Selection (CSS) algorithm by comprehensively considering negative samples' quality, variations, and quantity.
arXiv Detail & Related papers (2024-06-21T10:47:26Z)
- Robust Training of Temporal GNNs using Nearest Neighbours based Hard Negatives [44.49975766084011]
Training of temporal graph neural networks (TGNNs) is typically driven by an unsupervised loss based on uniform random negative sampling.
We propose a modified unsupervised training scheme for TGNNs that replaces uniform negative sampling with importance-based negative sampling.
We show that TGNNs trained with a loss based on the proposed negative sampling achieve consistently superior performance.
arXiv Detail & Related papers (2024-02-14T15:27:53Z)
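A minimal sketch, not the authors' code, of the replacement the entry above proposes: negatives are drawn with probability increasing in their embedding similarity to the anchor, so nearest neighbours (hard negatives) are sampled far more often than under the uniform scheme. The embedding matrix and temperature are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))  # stand-in for learned temporal node embeddings

def importance_negatives(anchor_id, positive_id, n_neg=5, temp=0.1):
    """Importance-based negative sampling: probability grows with similarity
    to the anchor, unlike uniform random sampling."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = z @ z[anchor_id]
    sims[[anchor_id, positive_id]] = -np.inf  # never sample the positive pair itself
    probs = np.exp(sims / temp)
    probs /= probs.sum()
    return rng.choice(len(embeddings), size=n_neg, replace=False, p=probs)

print(importance_negatives(anchor_id=0, positive_id=1))
```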
- Graph Ranking Contrastive Learning: An Extremely Simple yet Efficient Method [17.760628718072144]
InfoNCE uses augmentation techniques to obtain two views, where a node in one view acts as the anchor, the corresponding node in the other view serves as the positive sample, and all other nodes are regarded as negative samples.
The goal is to minimize the distance between the anchor node and positive samples and maximize the distance to negative samples.
Due to the lack of label information during training, InfoNCE inevitably treats samples from the same class as negative samples, leading to the issue of false negative samples.
We propose GraphRank, a simple yet efficient graph contrastive learning method that addresses the problem of false negative samples.
arXiv Detail & Related papers (2023-10-23T03:15:57Z)
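For reference, a compact version of the standard two-view InfoNCE objective that the GraphRank entry above critiques, not GraphRank itself: positives sit on the diagonal of the cross-view similarity matrix, and every off-diagonal node is treated as a negative, which is exactly where false negatives from the same class creep in.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temp=0.5):
    """Two-view InfoNCE: node i in view 1 is the anchor, node i in view 2 the
    positive, and all other nodes negatives (including any false negatives)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temp           # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)  # embeddings of two augmented views
print(info_nce(z1, z2).item())
```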
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
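A hedged sketch of the per-step idea summarized above: at each time step, sample the negative in proportion to the current model's item scores with the ground-truth item masked out, so items the model currently over-ranks serve as informative negatives. The softmax parameterization and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_model_aware_negative(item_scores, target_item):
    """Draw one negative for one time step, proportional to model scores,
    with the ground-truth next item masked out."""
    probs = np.exp(item_scores - item_scores.max())  # stable softmax numerator
    probs[target_item] = 0.0
    probs /= probs.sum()
    return rng.choice(len(item_scores), p=probs)

scores = rng.normal(size=50)  # stand-in for the SR model's logits over the catalog
print(sample_model_aware_negative(scores, target_item=7))
```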
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses jointly constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
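The sample-view / class-view construction above can be pictured as contrasting first the rows and then the columns of a batch's class-distribution matrix. A minimal sketch under that reading, not the paper's exact losses:

```python
import torch
import torch.nn.functional as F

def contrast(a, b, temp=0.5):
    """InfoNCE-style loss that matches row i of `a` with row i of `b`."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    return F.cross_entropy(a @ b.t() / temp, torch.arange(a.size(0)))

def doubly_contrastive_loss(p, p_aug):
    """p, p_aug: (batch, n_classes) class distributions of a batch and its
    augmented version. Rows give the sample view, columns the class view."""
    sample_loss = contrast(p, p_aug)         # positive pair: same sample, two views
    class_loss = contrast(p.t(), p_aug.t())  # positive pair: same class, two views
    return sample_loss + class_loss

p = torch.softmax(torch.randn(16, 10), dim=1)
p_aug = torch.softmax(torch.randn(16, 10), dim=1)
print(doubly_contrastive_loss(p, p_aug).item())
```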
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
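A common way to realize this kind of hard-negative sampling is importance reweighting: each negative's contribution is scaled in proportion to exp(beta * similarity to the anchor), so beta = 0 recovers plain uniform-negative InfoNCE and larger beta concentrates on harder negatives. The sketch below is schematic and omits any debiasing correction the actual estimator may include.

```python
import torch
import torch.nn.functional as F

def hard_negative_loss(anchor, positive, negatives, temp=0.5, beta=1.0):
    """InfoNCE-style loss with negatives reweighted ~ exp(beta * similarity);
    beta=0 gives uniform weights (ordinary uniform-negative InfoNCE)."""
    a = F.normalize(anchor, dim=0)
    pos = torch.exp(a @ F.normalize(positive, dim=0) / temp)
    neg_sims = F.normalize(negatives, dim=1) @ a / temp
    weights = torch.softmax(beta * neg_sims, dim=0) * len(neg_sims)  # mean weight 1
    neg = (weights * torch.exp(neg_sims)).mean()
    return -torch.log(pos / (pos + neg))

print(hard_negative_loss(torch.randn(32), torch.randn(32), torch.randn(10, 32)).item())
```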
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via the Metropolis-Hastings algorithm.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
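A generic sketch of the Metropolis-Hastings piece, not the paper's implementation: a chain over item ids draws negatives from an unnormalized target density without ever normalizing over the whole catalog. The uniform proposal and the sub-linear stand-in density are simplifying assumptions; MCNS's own proposal and self-contrast density differ.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000) + 1e-6  # stand-in for an (unnormalized) positive distribution

def density(i):
    """Sub-linear unnormalized target: negatives correlated with, but flatter
    than, the positive distribution."""
    return scores[i] ** 0.75

def mh_negative_chain(n_items, n_samples, burn_in=100):
    """Metropolis-Hastings over item ids with a uniform (symmetric) proposal,
    so the acceptance ratio is just density(candidate) / density(current)."""
    x = rng.integers(n_items)
    out = []
    for step in range(burn_in + n_samples):
        cand = rng.integers(n_items)
        if rng.random() < min(1.0, density(cand) / density(x)):
            x = cand
        if step >= burn_in:
            out.append(int(x))
    return out

print(mh_negative_chain(n_items=1000, n_samples=5))
```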
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)