A Systematic Replicability and Comparative Study of BSARec and SASRec for Sequential Recommendation
- URL: http://arxiv.org/abs/2506.14692v1
- Date: Tue, 17 Jun 2025 16:29:55 GMT
- Title: A Systematic Replicability and Comparative Study of BSARec and SASRec for Sequential Recommendation
- Authors: Chiara D'Ercoli, Giulia Di Teodoro, Federico Siciliano
- Abstract summary: This study compares two sequential recommender systems: Self-Attention based Sequential Recommendation (SASRec) and Beyond Self-Attention based Sequential Recommendation (BSARec). The results show that BSARec, by including bias terms for frequency enhancement, does indeed outperform SASRec.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study compares two sequential recommender systems, Self-Attention based Sequential Recommendation (SASRec) and Beyond Self-Attention based Sequential Recommendation (BSARec), in order to assess the improvement that frequency enhancement - the element added in BSARec - brings to recommendations. The models in the study have been re-implemented on a common base structure from EasyRec, with the aim of obtaining a fair and reproducible comparison. The results show that BSARec, by including bias terms for frequency enhancement, does indeed outperform SASRec, although the performance gains are not as high as those reported by the original authors. This work offers an overview of existing methods and, most importantly, underlines the importance of implementation details for performance comparison.
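The frequency-enhancement idea described above can be illustrated with a minimal sketch: decompose the item-embedding sequence into low- and high-frequency components with an FFT, reweight them, and blend the filtered signal with ordinary self-attention. This is an illustrative toy, not the authors' implementation; the function names, the fixed `alpha` weight, and the hard `cutoff` are all assumptions made here for clarity (in BSARec such weights are learned).

```python
import numpy as np

def self_attention(x):
    # Scaled dot-product self-attention over a sequence of item embeddings.
    # x: (seq_len, d) array; queries, keys and values are all x for simplicity.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def frequency_enhanced_layer(x, alpha=0.5, cutoff=2):
    # Split the sequence into low- and high-frequency parts via the real FFT
    # along the time axis, then recombine them with a scalar weight `alpha`
    # (fixed here for illustration; learned in the actual model).
    freq = np.fft.rfft(x, axis=0)
    low = freq.copy()
    low[cutoff:] = 0                      # keep only the lowest `cutoff` bins
    high = freq - low
    filtered = np.fft.irfft(alpha * low + (1 - alpha) * high,
                            n=x.shape[0], axis=0)
    # Blend the frequency-filtered signal with plain self-attention.
    return filtered + self_attention(x)
```

With `alpha = 1` and a cutoff covering every frequency bin, the filter is the identity and the layer reduces to a plain residual self-attention block, which makes the role of the frequency bias easy to isolate in experiments.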
Related papers
- RewardBench 2: Advancing Reward Model Evaluation [71.65938693914153]
Reward models are used throughout the post-training of language models to capture nuanced signals from preference data. The community has begun establishing best practices for evaluating reward models. This paper introduces RewardBench 2, a new multi-skill reward modeling benchmark.
arXiv Detail & Related papers (2025-06-02T17:54:04Z) - REARANK: Reasoning Re-ranking Agent via Reinforcement Learning [69.8397511935806]
We present REARANK, a large language model (LLM)-based listwise reasoning reranking agent. REARANK explicitly reasons before reranking, significantly improving both performance and interpretability.
arXiv Detail & Related papers (2025-05-26T14:31:48Z) - Benchmarking LLMs in Recommendation Tasks: A Comparative Evaluation with Conventional Recommenders [27.273217543282215]
We introduce RecBench, which evaluates two primary recommendation tasks, i.e., click-through rate prediction (CTR) and sequential recommendation (SeqRec). Our experiments cover up to 17 large models and are conducted across five diverse datasets from fashion, news, video, books, and music domains. Our findings indicate that LLM-based recommenders outperform conventional recommenders, achieving up to a 5% AUC improvement in the CTR scenario and up to a 170% NDCG@10 improvement in the SeqRec scenario.
arXiv Detail & Related papers (2025-03-07T15:05:23Z) - SAFERec: Self-Attention and Frequency Enriched Model for Next Basket Recommendation [41.94295877935867]
Transformer-based approaches such as BERT4Rec and SASRec demonstrate strong performance in Next Item Recommendation (NIR) tasks. Applying these architectures to Next-Basket Recommendation (NBR) tasks is challenging due to the vast number of possible item combinations in a basket. This paper introduces SAFERec, a novel algorithm for NBR that enhances transformer-based architectures from NIR with item frequency information.
arXiv Detail & Related papers (2024-12-18T20:10:42Z) - Revisiting BPR: A Replicability Study of a Common Recommender System Baseline [78.00363373925758]
We study the features of the BPR model, indicating their impact on its performance, and investigate open-source BPR implementations.
Our analysis reveals inconsistencies between these implementations and the original BPR paper, leading to a significant decrease in performance of up to 50% for specific implementations.
We show that the BPR model can achieve performance levels close to state-of-the-art methods on the top-n recommendation tasks and even outperform them on specific datasets.
arXiv Detail & Related papers (2024-09-21T18:39:53Z) - A Reproducible Analysis of Sequential Recommender Systems [13.987953631479662]
Sequential Recommender Systems (SRSs) have emerged as a highly efficient approach to recommendation tasks.
Existing works exhibit shortcomings in replicability of results, leading to inconsistent statements across papers.
Our work fills these gaps by standardising data pre-processing and model implementations.
arXiv Detail & Related papers (2024-08-07T16:23:29Z) - REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning [64.08293076551601]
We propose a novel method of using a learned measure for identifying positive pairs.
Our Retrieval-Based Reconstruction (REBAR) measure quantifies the similarity between two sequences.
We show that the REBAR error is a predictor of mutual class membership.
arXiv Detail & Related papers (2023-11-01T13:44:45Z) - Supervised Advantage Actor-Critic for Recommender Systems [76.7066594130961]
We propose a negative sampling strategy for training the RL component and combine it with supervised sequential learning.
Based on sampled (negative) actions (items), we can calculate the "advantage" of a positive action over the average case.
We instantiate SNQN and SA2C with four state-of-the-art sequential recommendation models and conduct experiments on two real-world datasets.
arXiv Detail & Related papers (2021-11-05T12:51:15Z) - Open-Set Recognition: A Good Closed-Set Classifier is All You Need [146.6814176602689]
We show that the ability of a classifier to make the 'none-of-above' decision is highly correlated with its accuracy on the closed-set classes.
We use this correlation to boost the performance of the cross-entropy OSR 'baseline' by improving its closed-set accuracy.
We also construct new benchmarks which better respect the task of detecting semantic novelty.
arXiv Detail & Related papers (2021-10-12T17:58:59Z) - SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval [11.38022203865326]
The SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches.
We modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation.
Overall, SPLADE is considerably improved with more than 9% gains on NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
arXiv Detail & Related papers (2021-09-21T10:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.