Addressing Cold Start For next-article Recommendation
- URL: http://arxiv.org/abs/2508.01036v1
- Date: Fri, 01 Aug 2025 19:28:57 GMT
- Title: Addressing Cold Start For next-article Recommendation
- Authors: Omar Elgohary, Nathan Jorgenson, Trenton Marple
- Abstract summary: This replication study adapts ALMM, the Adaptive Linear Mapping Model originally constructed for next-song recommendation, to the news recommendation problem on the MIND dataset. Our replication aims to improve recommendation performance in cold-start scenarios by restructuring this model to align it with user reading patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This replication study adapts ALMM, the Adaptive Linear Mapping Model originally constructed for next-song recommendation, to the news recommendation problem on the MIND dataset. The original version of ALMM computes latent representations for users, last-time items, and current items in a tensor factorization structure and learns a linear mapping from content features to latent item vectors. Our replication aims to improve recommendation performance in cold-start scenarios by restructuring this model around sequential news click behavior, viewing consecutively read articles as (last news, next news) tuples. Instead of the original audio features, we apply BERT and TF-IDF (Term Frequency-Inverse Document Frequency) to news titles and abstracts to extract contextualized token representations and align them with triplet-based user reading patterns. We also propose a reproducible and thorough pre-processing pipeline combining news filtering and feature-integrity validation. Our implementation of ALMM with TF-IDF shows improved recommendation accuracy and robustness relative to the Forbes and Oord baseline models in the cold-start scenario. We demonstrate that ALMM in a minimally modified state is not suitable for next-news recommendation.
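As a rough illustration of the restructuring described above, the sketch below builds (last news, next news) tuples from MIND click histories and extracts TF-IDF features from titles and abstracts. File paths, column names, and vectorizer settings follow the public MIND release and common scikit-learn usage; they are assumptions, not the paper's exact pipeline.

```python
from itertools import pairwise  # Python 3.10+

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Load MIND metadata (column layout follows the public MIND release;
# the file paths here are illustrative assumptions).
news = pd.read_csv(
    "news.tsv", sep="\t", header=None,
    names=["news_id", "category", "subcategory", "title", "abstract",
           "url", "title_entities", "abstract_entities"],
)
behaviors = pd.read_csv(
    "behaviors.tsv", sep="\t", header=None,
    names=["impression_id", "user_id", "time", "history", "impressions"],
)

# 1) Turn each user's chronological click history into
#    (last news, next news) tuples, mirroring ALMM's triplet structure.
triplets = []
for _, row in behaviors.dropna(subset=["history"]).iterrows():
    clicks = row["history"].split()
    for last_news, next_news in pairwise(clicks):
        triplets.append((row["user_id"], last_news, next_news))

# 2) TF-IDF content features over title + abstract, standing in for the
#    original audio features when learning the content-to-latent mapping.
news["text"] = news["title"].fillna("") + " " + news["abstract"].fillna("")
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
tfidf_features = vectorizer.fit_transform(news["text"])  # (n_news, 5000) sparse matrix
```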
Related papers
- End-to-End Personalization: Unifying Recommender Systems with Large Language Models [0.0]
We propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs). LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews. We evaluate our model on benchmark datasets, including MovieLens 100k and 1M, where it consistently outperforms strong baselines.
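For orientation, the sketch below shows a minimal single-head graph-attention update in plain PyTorch; the function name, shapes, and the assumption that the adjacency matrix already contains self-loops are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def gat_layer(h, adj, W, a, leaky_slope=0.2):
    """Minimal single-head GAT update (illustrative sketch, not the paper's code).

    h:   (N, F_in) node features, e.g. LLM-enriched user/item profiles
    adj: (N, N) binary adjacency with self-loops (so every row has a neighbour)
    W:   (F_in, F_out) shared linear transform
    a:   (2 * F_out,) attention vector
    """
    Wh = h @ W                                    # (N, F_out)
    f_out = Wh.size(1)
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every node pair (i, j)
    e = F.leaky_relu(
        (Wh @ a[:f_out]).unsqueeze(1) + (Wh @ a[f_out:]).unsqueeze(0),
        negative_slope=leaky_slope,
    )                                             # (N, N)
    e = e.masked_fill(adj == 0, float("-inf"))    # attend only to neighbours
    alpha = torch.softmax(e, dim=1)               # normalise over each node's neighbours
    return alpha @ Wh                             # aggregated node embeddings
```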
arXiv Detail & Related papers (2025-08-02T22:46:50Z) - LLM2Rec: Large Language Models Are Powerful Embedding Models for Sequential Recommendation [49.78419076215196]
Sequential recommendation aims to predict users' future interactions by modeling collaborative filtering (CF) signals from the historical behaviors of similar users or items. Traditional sequential recommenders rely on ID-based embeddings, which capture CF signals through high-order co-occurrence patterns. Recent advances in large language models (LLMs) have motivated text-based recommendation approaches that derive item representations from textual descriptions. We argue that an ideal embedding model should seamlessly integrate CF signals with rich semantic representations to improve both in-domain and out-of-domain recommendation performance.
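To make the contrast between ID-based and text-derived item embeddings concrete, here is a minimal next-item scoring sketch; `item_emb` is a hypothetical lookup that could be filled from either a learned ID table or text encodings.

```python
import numpy as np

def score_next_items(history_ids, candidate_ids, item_emb):
    """Rank candidate items for next-item prediction (illustrative sketch).

    item_emb maps item id -> vector; the vectors could come from an ID-based
    embedding table (CF signals) or from a text encoder (semantic signals).
    """
    user_vec = np.mean([item_emb[i] for i in history_ids], axis=0)  # mean-pooled history
    scores = {c: float(user_vec @ item_emb[c]) for c in candidate_ids}
    return sorted(scores, key=scores.get, reverse=True)  # best-scoring items first
```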
arXiv Detail & Related papers (2025-06-16T13:27:06Z) - Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing [58.52119063742121]
Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving model performance. This paper addresses the question of how to optimally combine the model's predictions and the provided labels. Our main contribution is the derivation of the Bayes optimal aggregator function to combine the current model's predictions and the given labels.
arXiv Detail & Related papers (2025-05-21T07:16:44Z) - Enhancing News Recommendation with Hierarchical LLM Prompting [17.481812986550633]
We introduce PNR-LLM, which leverages Large Language Models for Personalized News Recommendation. PNR-LLM harnesses the generation capabilities of LLMs to enrich news titles and abstracts. We propose an attention mechanism to aggregate enriched semantic- and entity-level data, forming unified user and news embeddings.
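A minimal sketch of attention-based aggregation over per-article feature vectors is shown below; the additive-attention form and all names are assumptions rather than the paper's architecture.

```python
import torch

def attention_pool(vectors, q, W):
    """Pool semantic- and entity-level vectors into one embedding (illustrative).

    vectors: (K, d) stacked feature vectors for one news article (or one user)
    q:       (d_a,) learnable query vector
    W:       (d, d_a) projection matrix
    Returns a single unified (d,) embedding.
    """
    scores = torch.tanh(vectors @ W) @ q      # (K,) unnormalised attention scores
    weights = torch.softmax(scores, dim=0)    # attention weights over the K inputs
    return weights @ vectors                  # weighted sum -> unified embedding
```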
arXiv Detail & Related papers (2025-04-29T06:02:16Z) - Aligning Large Language Models via Fine-grained Supervision [20.35000061196631]
Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations.
Current approaches focus on using reinforcement learning with human feedback to improve model alignment.
We propose a method to enhance LLM alignment through fine-grained token-level supervision.
arXiv Detail & Related papers (2024-06-04T20:21:45Z) - MMGRec: Multimodal Generative Recommendation with Transformer Model [81.61896141495144]
MMGRec aims to introduce a generative paradigm into multimodal recommendation.
We first devise a hierarchical quantization method, Graph CF-RQVAE, to assign a Rec-ID to each item from its multimodal information.
We then train a Transformer-based recommender to generate the Rec-IDs of user-preferred items based on historical interaction sequences.
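As a generic illustration of hierarchical quantization, independent of the paper's Graph CF-RQVAE, the sketch below assigns a multi-level discrete ID to an item embedding via residual quantization; codebooks and names are hypothetical.

```python
import numpy as np

def residual_quantize(vec, codebooks):
    """Assign a multi-level discrete ID to an item embedding (illustrative sketch).

    At each level the nearest codeword is chosen and subtracted, and the
    remainder is quantized at the next level; the tuple of chosen indices
    plays the role of a discrete item identifier.
    """
    residual, ids = vec.astype(float), []
    for codebook in codebooks:                 # each codebook: (num_codes, d)
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        ids.append(idx)
        residual = residual - codebook[idx]
    return tuple(ids)

# Example: three levels of 256 codes each give IDs like (17, 201, 5)
rng = np.random.default_rng(0)
books = [rng.normal(size=(256, 64)) for _ in range(3)]
print(residual_quantize(rng.normal(size=64), books))
```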
arXiv Detail & Related papers (2024-04-25T12:11:27Z) - Text Matching Improves Sequential Recommendation by Reducing Popularity Biases [48.272381505993366]
TASTE verbalizes items and user-item interactions using identifiers and attributes of items.
Our experiments show that TASTE outperforms the state-of-the-art methods on widely used sequential recommendation datasets.
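As a toy illustration of item verbalization, the snippet below renders an item and a user history as text; field names and phrasing are assumptions, not TASTE's actual templates.

```python
def verbalize_item(item_id, attrs):
    """Render an item as text from its identifier and attributes (illustrative)."""
    attr_text = ", ".join(f"{k}: {v}" for k, v in attrs.items())
    return f"item {item_id} ({attr_text})"

def verbalize_history(history):
    """Render a user's interaction sequence as text that a text-matching model
    can score against verbalized candidate items."""
    return "The user has interacted with: " + "; ".join(
        verbalize_item(i, a) for i, a in history
    )

print(verbalize_history([("N1001", {"category": "sports", "title": "Finals recap"})]))
```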
arXiv Detail & Related papers (2023-08-27T07:44:33Z) - Generating Query Focused Summaries without Fine-tuning the Transformer-based Pre-trained Models [0.6124773188525718]
Fine-tuning Natural Language Processing (NLP) models for each new data set requires substantial computational time, with an associated increase in carbon footprint and cost.
In this paper, we try to omit the fine-tuning steps and investigate whether the Marginal Maximum Relevance (MMR)-based approach can help the pre-trained models to obtain query-focused summaries directly from a new data set that was not used to pre-train the models.
As indicated by the experimental results, our MMR-based approach successfully ranked and selected the most relevant sentences as summaries and showed better performance than the individual pre-trained models.
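For reference, a bare-bones MMR selection loop over precomputed sentence vectors looks roughly like the sketch below; the vector representations and the lambda trade-off are assumptions, and the paper pairs MMR with pre-trained models rather than this minimal version.

```python
import numpy as np

def mmr_select(query_vec, sent_vecs, k=3, lam=0.7):
    """Pick k sentences by Maximal Marginal Relevance (illustrative sketch)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def mmr_score(i):
        relevance = cos(sent_vecs[i], query_vec)    # similarity to the query
        redundancy = max(                           # similarity to already chosen sentences
            (cos(sent_vecs[i], sent_vecs[j]) for j in selected), default=0.0
        )
        return lam * relevance - (1 - lam) * redundancy

    selected, remaining = [], list(range(len(sent_vecs)))
    while remaining and len(selected) < k:
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected  # indices of chosen sentences, in selection order
```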
arXiv Detail & Related papers (2023-03-10T22:40:15Z) - Sequential Recommendation with Auxiliary Item Relationships via Multi-Relational Transformer [74.64431400185106]
We propose a Multi-relational Transformer capable of modeling auxiliary item relationships for Sequential Recommendation (SR).
Specifically, we propose a novel self-attention module, which incorporates arbitrary item relationships and weights them accordingly.
In addition, for inter-sequence item relationship pairs, we introduce a novel inter-sequence related-items modeling module.
arXiv Detail & Related papers (2022-10-24T19:49:17Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
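As background for distribution-based attention, the sketch below gives the standard closed-form squared 2-Wasserstein distance between diagonal-covariance Gaussian item embeddings; it illustrates the kind of distance such stochastic embeddings use, not the paper's full Wasserstein self-attention module.

```python
import torch

def wasserstein2_sq(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal-covariance Gaussians.

    Each item is represented by a mean vector and a per-dimension standard
    deviation; attention weights can be derived from negative distances like
    this instead of dot products between point embeddings.
    """
    return ((mu1 - mu2) ** 2).sum(-1) + ((sigma1 - sigma2) ** 2).sum(-1)

# Example: distance between two stochastic item embeddings
mu_a, sig_a = torch.zeros(8), torch.ones(8)
mu_b, sig_b = torch.ones(8), 0.5 * torch.ones(8)
print(wasserstein2_sq(mu_a, sig_a, mu_b, sig_b))  # tensor(10.)
```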
arXiv Detail & Related papers (2022-01-16T12:38:45Z)