Enhancing News Recommendation with Hierarchical LLM Prompting
- URL: http://arxiv.org/abs/2504.20452v1
- Date: Tue, 29 Apr 2025 06:02:16 GMT
- Title: Enhancing News Recommendation with Hierarchical LLM Prompting
- Authors: Hai-Dang Kieu, Delvin Ce Zhang, Minh Duc Nguyen, Min Xu, Qiang Wu, Dung D. Le
- Abstract summary: We introduce PNR-LLM, for Large Language Models for Personalized News Recommendation. PNR-LLM harnesses the generation capabilities of LLMs to enrich news titles and abstracts. We propose an attention mechanism to aggregate enriched semantic- and entity-level data, forming unified user and news embeddings.
- Score: 17.481812986550633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized news recommendation systems often struggle to effectively capture the complexity of user preferences, as they rely heavily on shallow representations, such as article titles and abstracts. To address this problem, we introduce a novel method, namely PNR-LLM, for Large Language Models for Personalized News Recommendation. Specifically, PNR-LLM harnesses the generation capabilities of LLMs to enrich news titles and abstracts, and consequently improves recommendation quality. PNR-LLM contains a novel module, News Enrichment via LLMs, which generates deeper semantic information and relevant entities from articles, transforming shallow contents into richer representations. We further propose an attention mechanism to aggregate enriched semantic- and entity-level data, forming unified user and news embeddings that reveal a more accurate user-news match. Extensive experiments on MIND datasets show that PNR-LLM outperforms state-of-the-art baselines. Moreover, the proposed data enrichment module is model-agnostic, and we empirically show that applying our proposed module to multiple existing models can further improve their performance, verifying the advantage of our design.
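The abstract describes an attention mechanism that aggregates the LLM-enriched semantic- and entity-level embeddings into a single news (or user) vector. The paper does not give the exact formulation here, so the following is only a minimal sketch of one common way such additive-style attention pooling is done; the embedding dimensions, the learned query vector, and the function names are all hypothetical, not taken from PNR-LLM.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_aggregate(embeddings, query):
    """Pool a set of embeddings (n, d) into one vector (d,) via
    attention weights computed against a learned query (d,)."""
    scores = embeddings @ query        # one relevance score per embedding
    weights = softmax(scores)          # normalize scores to sum to 1
    return weights @ embeddings        # weighted sum of the embeddings

# Hypothetical enriched representations for one news article.
rng = np.random.default_rng(0)
semantic_emb = rng.normal(size=(4, 8))   # e.g. LLM-generated semantic sentences
entity_emb = rng.normal(size=(3, 8))     # e.g. embeddings of extracted entities
query = rng.normal(size=8)               # attention query (learned in practice)

# Aggregate both levels into a unified news embedding.
news_vec = attention_aggregate(np.vstack([semantic_emb, entity_emb]), query)
print(news_vec.shape)  # (8,)
```

The same pooling could be applied on the user side, aggregating the embeddings of clicked news into a unified user embedding, with the user-news match then scored by a dot product between the two vectors.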
Related papers
- Revisiting Language Models in Neural News Recommender Systems [48.372289545886495]
Neural news recommender systems (RSs) have integrated language models (LMs) to encode news articles with rich textual information into representations.
Most studies suggest that news RSs achieve better performance with larger pre-trained language models (PLMs) than with shallow language models (SLMs).
This paper revisits, unifies, and extends these comparisons of the effectiveness of LMs in news RSs using the real-world MIND dataset.
arXiv Detail & Related papers (2025-01-20T10:35:36Z) - Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns [6.4561443264763625]
In news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks.
In this paper, we propose a novel NR algorithm to reshape the news model via LLM Embedding and Co-Occurrence Pattern (LECOP)
Extensive experiments demonstrate the superior performance of our proposed novel method.
arXiv Detail & Related papers (2024-11-09T03:01:49Z) - Beyond Retrieval: Generating Narratives in Conversational Recommender Systems [4.73461454584274]
We introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations.
We establish benchmarks using well-known generative metrics, and perform an automated evaluation of the new dataset using a rater LLM.
To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
arXiv Detail & Related papers (2024-10-22T07:53:41Z) - LLMEmb: Large Language Model Can Be a Good Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the ability to capture semantic relationships between items, independent of their popularity.
We introduce LLMEmb, a novel method leveraging LLM to generate item embeddings that enhance Sequential Recommender Systems (SRS) performance.
arXiv Detail & Related papers (2024-09-30T03:59:06Z) - DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System [83.34921966305804]
Large language models (LLMs) have demonstrated remarkable performance in recommender systems. We propose a novel plug-and-play alignment framework for LLMs and collaborative models. Our method is superior to existing state-of-the-art algorithms.
arXiv Detail & Related papers (2024-08-15T15:56:23Z) - MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and by using a unified latent-space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z) - News Recommendation with Category Description by a Large Language Model [1.6267479602370543]
News categories, such as tv-golden-globe, finance-real-estate, and news-politics, play an important role in understanding news content.
We propose a novel method that automatically generates informative category descriptions using a large language model.
Our method achieved up to a 5.8% improvement in AUC compared with baseline approaches.
arXiv Detail & Related papers (2024-05-13T08:53:43Z) - Representation Learning with Large Language Models for Recommendation [33.040389989173825]
We propose a model-agnostic framework, RLMRec, to enhance recommenders with LLM-empowered representation learning. RLMRec incorporates auxiliary textual signals, develops a user/item profiling paradigm empowered by LLMs, and aligns the semantic space of LLMs with the representation space of collaborative relational signals.
arXiv Detail & Related papers (2023-10-24T15:51:13Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI)
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP)
This survey presents a taxonomy that categorizes these models into two major paradigms, namely Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Why Do We Click: Visual Impression-aware News Recommendation [108.73539346064386]
This work is inspired by the fact that users make their click decisions mostly based on the visual impression they perceive when browsing news.
We propose to capture such visual impression information with visual-semantic modeling for news recommendation.
In addition, we inspect the impression from a global view and take structural information into account, such as the arrangement of different fields and the spatial positions of different words on the impression.
arXiv Detail & Related papers (2021-09-26T16:58:14Z)