gSASRec: Reducing Overconfidence in Sequential Recommendation Trained
with Negative Sampling
- URL: http://arxiv.org/abs/2308.07192v1
- Date: Mon, 14 Aug 2023 14:56:40 GMT
- Authors: Aleksandr Petrov and Craig Macdonald
- Abstract summary: We show that models trained with negative sampling tend to overestimate the probabilities of positive interactions.
We propose a novel Generalised Binary Cross-Entropy Loss function (gBCE) and theoretically prove that it can mitigate overconfidence.
We show through detailed experiments on three datasets that gSASRec does not exhibit the overconfidence problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large catalogue size is one of the central challenges in training
recommendation models: a large number of items makes them memory and
computationally inefficient to compute scores for all items during training,
forcing these models to deploy negative sampling. However, negative sampling
increases the proportion of positive interactions in the training data, and
therefore models trained with negative sampling tend to overestimate the
probabilities of positive interactions, a phenomenon we call overconfidence.
While the absolute values of the predicted scores or probabilities are not
important for the ranking of retrieved recommendations, overconfident models
may fail to estimate nuanced differences in the top-ranked items, resulting in
degraded performance. In this paper, we show that overconfidence explains why
the popular SASRec model underperforms when compared to BERT4Rec. This is
contrary to the BERT4Rec authors' explanation that the difference in performance
is due to the bi-directional attention mechanism. To mitigate overconfidence,
we propose a novel Generalised Binary Cross-Entropy Loss function (gBCE) and
theoretically prove that it can mitigate overconfidence. We further propose the
gSASRec model, an improvement over SASRec that deploys an increased number of
negatives and the gBCE loss. We show through detailed experiments on three
datasets that gSASRec does not exhibit the overconfidence problem. As a result,
gSASRec can outperform BERT4Rec (e.g. +9.47% NDCG on the MovieLens-1M dataset),
while requiring less training time (e.g. -73% training time on MovieLens-1M).
Moreover, in contrast to BERT4Rec, gSASRec is suitable for large datasets that
contain more than 1 million items.
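The core idea of gBCE is to temper the positive term of binary cross-entropy: the predicted positive probability is raised to a power β derived from the negative sampling rate, which counteracts the inflated proportion of positives that sampling creates. The sketch below is a minimal scalar illustration under my reading of the abstract; the function name `gbce_loss`, the scalar (non-batched) interface, and the exact formula for β in terms of the calibration parameter t are assumptions, not the authors' reference implementation.

```python
import math

def gbce_loss(pos_logit, neg_logits, num_negatives, catalog_size, t=0.75):
    """Hypothetical scalar sketch of a generalised BCE (gBCE) loss for one
    positive item and its sampled negatives. The real model operates on
    batched tensors; this only illustrates the calibration idea."""
    # alpha: negative sampling rate (fraction of the catalogue seen as negatives)
    alpha = num_negatives / (catalog_size - 1)
    # beta: calibration exponent; t = 0 recovers plain BCE (beta = 1),
    # larger t shrinks beta toward alpha, reducing overconfidence
    beta = alpha * (t * (1 - 1 / alpha) + 1 / alpha)
    sigmoid = lambda x: 1 / (1 + math.exp(-x))
    # gBCE raises the positive probability to the power beta:
    # -log(sigmoid(s+) ** beta) = -beta * log(sigmoid(s+))
    pos_term = beta * math.log(sigmoid(pos_logit))
    neg_term = sum(math.log(1 - sigmoid(s) + 1e-12) for s in neg_logits)
    return -(pos_term + neg_term)
```

With t = 0 the exponent is 1 and the loss reduces to standard binary cross-entropy; increasing t down-weights the positive term, which is how the calibration mitigates the overestimated positive probabilities described above.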
Related papers
- Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec? [1.223779595809275]
Two state-of-the-art baselines are Transformer-based models SASRec and BERT4Rec.
In most of the publications, BERT4Rec achieves better performance than SASRec.
We show that SASRec could be effectively trained with negative sampling and still outperform BERT4Rec, but the number of negative examples should be much larger than one.
arXiv Detail & Related papers (2023-09-14T11:07:10Z) - Unsupervised Dense Retrieval with Relevance-Aware Contrastive
Pre-Training [81.3781338418574]
We propose relevance-aware contrastive learning.
We consistently improve the SOTA unsupervised Contriever model on the BEIR and open-domain QA retrieval benchmarks.
Our method can not only beat BM25 after further pre-training on the target corpus but also serves as a good few-shot learner.
arXiv Detail & Related papers (2023-06-05T18:20:27Z) - Improving Sequential Recommendation Models with an Enhanced Loss
Function [9.573139673704766]
We develop an improved loss function for sequential recommendation models.
We conduct experiments on two influential open-source libraries.
We reproduce the results of the BERT4Rec model on the Beauty dataset.
arXiv Detail & Related papers (2023-01-03T07:18:54Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Adversarial Unlearning: Reducing Confidence Along Adversarial Directions [88.46039795134993]
We propose a complementary regularization strategy that reduces confidence on self-generated examples.
The method, which we call RCAD, aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss.
Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques to increase test accuracy by 1-3% in absolute value.
arXiv Detail & Related papers (2022-06-03T02:26:24Z) - VSAC: Efficient and Accurate Estimator for H and F [68.65610177368617]
VSAC is a RANSAC-type robust estimator with a number of novelties.
It is significantly faster than all its predecessors and runs on average in 1-2 ms on a CPU.
It is two orders of magnitude faster and yet as precise as MAGSAC++, the currently most accurate estimator of two-view geometry.
arXiv Detail & Related papers (2021-06-18T17:04:57Z) - Efficiently Teaching an Effective Dense Retriever with Balanced Topic
Aware Sampling [37.01593605084575]
TAS-Balanced is an efficient topic-aware query and balanced margin sampling technique.
We show that our TAS-Balanced training method achieves state-of-the-art low-latency (64ms per query) results on two TREC Deep Learning Track query sets.
arXiv Detail & Related papers (2021-04-14T16:49:18Z) - MixPUL: Consistency-based Augmentation for Positive and Unlabeled
Learning [8.7382177147041]
We propose a simple yet effective data augmentation method, coined MixPUL, based on consistency regularization.
MixPUL incorporates supervised and unsupervised consistency training to generate augmented data.
We show that MixPUL achieves an averaged improvement of classification error from 16.49 to 13.09 on the CIFAR-10 dataset across different positive data amounts.
arXiv Detail & Related papers (2020-04-20T15:43:33Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.