Improving the Performance of Sequential Recommendation Systems with an Extended Large Language Model
- URL: http://arxiv.org/abs/2507.19990v1
- Date: Sat, 26 Jul 2025 15:59:25 GMT
- Title: Improving the Performance of Sequential Recommendation Systems with an Extended Large Language Model
- Authors: Sinnyum Choi, Woong Kim
- Abstract summary: This study proposes to improve LLM-based recommendation systems by replacing Llama2 with Llama3 in the LlamaRec framework.
The experimental results show average performance improvements of 38.65%, 8.69%, and 8.19% for the ML-100K, Beauty, and Games datasets, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, competition in the field of artificial intelligence (AI) has intensified among major technology companies, resulting in the continuous release of new large language models (LLMs) with improved language understanding and context-based reasoning capabilities. These advances are expected to enable more efficient personalized recommendations in LLM-based recommendation systems through improved training-data quality and architectural design. However, many studies have not considered these recent developments. This study proposes improving LLM-based recommendation systems by replacing Llama2 with Llama3 in the LlamaRec framework. To ensure a fair comparison, random seed values were fixed and identical input data were provided during preprocessing and training. The experimental results show average performance improvements of 38.65%, 8.69%, and 8.19% for the ML-100K, Beauty, and Games datasets, respectively, confirming the practicality of this method. Notably, the significant improvements achieved by model replacement indicate that recommendation quality can be improved cost-effectively without structural changes to the system. Based on these results, we contend that the proposed approach is a viable solution for improving the performance of current recommendation systems.
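The reported setup boils down to swapping the backbone LLM while holding the data pipeline and random seeds fixed. The snippet below is a minimal sketch of that idea, not the authors' code: the Hugging Face model identifiers, the seed value, and the helper names are assumptions made for illustration.

```python
import random

import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def set_seed(seed: int = 42) -> None:
    """Fix all relevant random seeds so that the backbone LLM is the only
    factor that differs between the two training runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def load_backbone(model_name: str):
    """Load a causal-LM backbone; the same call works for Llama2 and Llama3
    checkpoints, so the replacement is a one-line configuration change."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    return tokenizer, model


set_seed(42)

# Baseline backbone (Llama2) vs. the proposed replacement (Llama3).
# Preprocessing, prompts, and training hyperparameters stay identical,
# which is what makes the comparison fair.
baseline_tokenizer, baseline_model = load_backbone("meta-llama/Llama-2-7b-hf")
proposed_tokenizer, proposed_model = load_backbone("meta-llama/Meta-Llama-3-8B")
```

Because both checkpoints expose the same interface, the rest of the LlamaRec pipeline needs no structural changes, which is the cost argument the abstract makes.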
Related papers
- Optimizers Qualitatively Alter Solutions And We Should Leverage This [62.662640460717476]
Deep Neural Networks (DNNs) cannot be guaranteed to converge to a unique global minimum of the loss when trained with optimizers that use only local information, such as SGD.
We argue that the community should aim at understanding the biases of already existing methods, as well as aim to build new DNNs with the explicit intent of inducing certain properties of the solution.
arXiv Detail & Related papers (2025-07-16T13:33:31Z)
- Efficient Code LLM Training via Distribution-Consistent and Diversity-Aware Data Selection [29.647174797769015]
We introduce an approach that utilizes a parametric model for code data selection, aimed at improving both training efficiency and model performance.
Our method achieves gains of 2.4% (HumanEval) and 2.3% (MBPP) over the 92K full-sampled baseline, outperforming other sampling approaches in both performance and efficiency.
arXiv Detail & Related papers (2025-07-03T07:19:56Z)
- OneRec Technical Report [65.24343832974165]
We propose OneRec, which reshapes the recommendation system through an end-to-end generative approach.
Firstly, we have increased the computational FLOPs of the current recommendation model by 10x and have identified the scaling laws for recommendations within certain boundaries.
Secondly, reinforcement learning techniques, previously difficult to apply for optimizing recommendations, show significant potential in this framework.
arXiv Detail & Related papers (2025-06-16T16:58:55Z)
- Benchmarking LLMs in Recommendation Tasks: A Comparative Evaluation with Conventional Recommenders [27.273217543282215]
We introduce RecBench, which evaluates two primary recommendation tasks, i.e., click-through rate prediction (CTR) and sequential recommendation (SeqRec).
Our experiments cover up to 17 large models and are conducted across five diverse datasets from fashion, news, video, books, and music domains.
Our findings indicate that LLM-based recommenders outperform conventional recommenders, achieving up to a 5% AUC improvement in the CTR scenario and up to a 170% NDCG@10 improvement in the SeqRec scenario (a short sketch of the NDCG@10 metric follows this entry).
arXiv Detail & Related papers (2025-03-07T15:05:23Z)
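NDCG@10, the metric quoted for the SeqRec results above, rewards placing the held-out item near the top of the recommended list, discounting hits logarithmically by rank. The sketch below is a generic illustration of the metric, not code from RecBench; the function name and the toy example are made up.

```python
import math


def ndcg_at_k(ranked_items: list[str], relevant_items: set[str], k: int = 10) -> float:
    """NDCG@k with binary relevance.

    ranked_items: items ordered by predicted score, best first.
    relevant_items: ground-truth items the user actually interacted with.
    """
    # DCG: a hit at 0-based rank r contributes 1 / log2(r + 2), so hits near
    # the top of the list are worth more than hits near the cutoff.
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    # IDCG: the best achievable DCG, i.e. every relevant item ranked first.
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0


# Toy example: the held-out next item sits at rank 3 of the recommendations,
# so NDCG@10 = 1 / log2(3 + 1) = 0.5.
print(ndcg_at_k(["item_b", "item_c", "item_a", "item_d"], {"item_a"}))
```

In the sequential setting there is usually a single held-out next item per user, so NDCG@10 reduces to a position-discounted hit rate averaged over users.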
- Your Language Model May Think Too Rigidly: Achieving Reasoning Consistency with Symmetry-Enhanced Training [66.48331530995786]
We propose syMmetry-ENhanceD (MEND) Data Augmentation, a data-centric approach that improves the model's ability to extract useful information from context.
Unlike existing methods that emphasize reasoning chain augmentation, our approach improves model robustness at the knowledge extraction stage.
Experiments on both logical and arithmetic reasoning tasks show that MEND enhances reasoning performance across diverse query variations.
arXiv Detail & Related papers (2025-02-25T03:03:35Z)
- Dynamic Noise Preference Optimization for LLM Self-Improvement via Synthetic Data [51.62162460809116]
We introduce Dynamic Noise Preference Optimization (DNPO) to ensure consistent improvements across iterations.
In experiments with Zephyr-7B, DNPO consistently outperforms existing methods, showing an average performance boost of 2.6%.
DNPO shows a significant improvement in model-generated data quality, with a 29.4% win-loss rate gap compared to the baseline in GPT-4 evaluations.
arXiv Detail & Related papers (2025-02-08T01:20:09Z)
- Multi-Response Preference Optimization with Augmented Ranking Dataset [2.8973004951877357]
Preference optimization has played a significant role in improving the performance of Large Language Models.
We propose a novel approach to augment Preference Optimization datasets.
We also introduce a Multi-response-based Preference Optimization training method.
arXiv Detail & Related papers (2024-12-10T05:45:36Z)
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality (a structural sketch of this two-stage pipeline follows this entry).
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
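The Star-Agents summary above describes a two-stage pipeline: several LLM agents propose instruction data, and a dual-model pass then scores each sample for difficulty and quality before it is kept. The sketch below only mirrors that structure; every name, type, and threshold is hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical interfaces: an "agent" maps a seed prompt to a candidate
# instruction-response pair, and a "scorer" maps text to a score in [0, 1].
Agent = Callable[[str], str]
Scorer = Callable[[str], float]


@dataclass
class Candidate:
    text: str
    difficulty: float = 0.0
    quality: float = 0.0


def generate_candidates(seed: str, agents: list[Agent]) -> list[Candidate]:
    """Stage 1: multiple LLM agents each propose instruction data for a seed."""
    return [Candidate(agent(seed)) for agent in agents]


def dual_model_filter(
    candidates: list[Candidate],
    difficulty_scorer: Scorer,
    quality_scorer: Scorer,
    min_difficulty: float = 0.3,
    min_quality: float = 0.7,
) -> list[Candidate]:
    """Stage 2: keep samples that two separate scoring models judge to be
    both sufficiently difficult and sufficiently well-formed."""
    kept = []
    for cand in candidates:
        cand.difficulty = difficulty_scorer(cand.text)
        cand.quality = quality_scorer(cand.text)
        if cand.difficulty >= min_difficulty and cand.quality >= min_quality:
            kept.append(cand)
    return kept
```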
- Achieving Peak Performance for Large Language Models: A Systematic Review [0.0]
Large language models (LLMs) have achieved remarkable success in natural language processing (NLP).
As models grow into the trillion-parameter range, computational and memory costs increase significantly.
This makes it difficult for many researchers to access the resources needed to train or apply these models.
arXiv Detail & Related papers (2024-09-07T13:57:41Z)
- CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration [59.48235003469116]
We show that data augmentation consistently enhances OOD performance.
We also show that CF-augmented models, which are easier to calibrate, exhibit much lower entropy when assigning importance.
arXiv Detail & Related papers (2023-09-14T16:16:40Z)