Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns
- URL: http://arxiv.org/abs/2411.06046v1
- Date: Sat, 09 Nov 2024 03:01:49 GMT
- Title: Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns
- Authors: Zheng Li, Kai Zhang
- Abstract summary: In news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks.
In this paper, we propose a novel NR algorithm to reshape the news model via LLM Embedding and Co-Occurrence Pattern (LECOP).
Extensive experiments demonstrate the superior performance of our proposed novel method.
- Score: 6.4561443264763625
- License:
- Abstract: In the past two years, large language models (LLMs) have achieved rapid development and demonstrated remarkable emergent capabilities. Concurrently, with powerful semantic understanding and reasoning capabilities, LLMs have significantly empowered the rapid advancement of the recommendation-system field. Specifically, in news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks. This requirement exceeds the capabilities of traditional NR models but aligns well with the strengths of LLMs. In this paper, we propose a novel NR algorithm to reshape the news model via LLM Embedding and Co-Occurrence Pattern (LECOP). On one hand, we fine-tuned an LLM via contrastive learning on large-scale datasets to encode news, which fully exploits the semantic information of news to thoroughly identify user preferences. On the other hand, we explored multiple co-occurrence patterns to mine collaborative information: news-ID co-occurrence, item-item keyword co-occurrence, and intra-item keyword co-occurrence, where all keywords are generated by the LLM. To the best of our knowledge, this is the first work to construct such detailed co-occurrence patterns via LLMs to capture collaborative signals. Extensive experiments demonstrate the superior performance of our proposed novel method.
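As a rough illustration of the three patterns named in the abstract, the Python sketch below counts news-ID, item-item keyword, and intra-item keyword co-occurrences over toy click sessions. It assumes LLM-generated keywords are already attached to each clicked item; the session format and all names are illustrative, not the authors' implementation.

```python
from collections import Counter
from itertools import combinations

# Illustrative click histories: each user's session is a list of
# (news_id, keywords) pairs, where the keywords would be LLM-generated.
sessions = [
    [("n1", ["election", "senate"]), ("n2", ["senate", "budget"])],
    [("n1", ["election", "senate"]), ("n3", ["budget", "tax"])],
]

id_cooc = Counter()          # news-ID co-occurrence within a session
item_item_kw = Counter()     # keyword pairs across different items
intra_item_kw = Counter()    # keyword pairs within a single item

for session in sessions:
    for (id_a, kw_a), (id_b, kw_b) in combinations(session, 2):
        id_cooc[tuple(sorted((id_a, id_b)))] += 1
        for ka in kw_a:
            for kb in kw_b:
                if ka != kb:
                    item_item_kw[tuple(sorted((ka, kb)))] += 1
    for _, kws in session:
        for ka, kb in combinations(sorted(set(kws)), 2):
            intra_item_kw[(ka, kb)] += 1

print(id_cooc.most_common(3))
print(item_item_kw.most_common(3))
print(intra_item_kw.most_common(3))
```

Such counts could then feed, for example, pointwise-mutual-information features or co-occurrence graphs; how the paper consumes them is not detailed in this abstract.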
Related papers
- Beyond Retrieval: Generating Narratives in Conversational Recommender Systems [4.912663905306209]
We introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations.
We establish benchmarks using well-known generative metrics, and perform an automated evaluation of the new dataset using a rater LLM.
To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
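A minimal sketch of the rater-LLM evaluation mentioned above, assuming any chat-completion client can be passed in as `call_llm` (a hypothetical stand-in, not REGEN code):

```python
import json

RATER_PROMPT = """You are grading a recommendation narrative.
User context: {context}
Generated narrative: {narrative}
Return JSON: {{"relevance": 1-5, "fluency": 1-5, "grounding": 1-5}}"""

def rate(call_llm, context: str, narrative: str) -> dict:
    """Ask a rater LLM to score one generated narrative."""
    reply = call_llm(RATER_PROMPT.format(context=context, narrative=narrative))
    return json.loads(reply)  # assumes the rater returns valid JSON
```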
arXiv Detail & Related papers (2024-10-22T07:53:41Z)
- CoRA: Collaborative Information Perception by Large Language Model's Weights for Recommendation [13.867950651601483]
Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation.
Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input.
We propose a new paradigm, Collaborative LoRA (CoRA), with a collaborative query generator.
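The contrast described above (input-side concatenation vs. weight-side injection) can be sketched in PyTorch. This is only a guess at the general shape of a "collaborative LoRA" layer, with illustrative dimensions and names, not CoRA's actual architecture:

```python
import torch
import torch.nn as nn

class CollabLoRALinear(nn.Module):
    def __init__(self, d_model=64, d_collab=16, rank=4):
        super().__init__()
        self.base = nn.Linear(d_model, d_model)      # stands in for a frozen LLM weight
        for p in self.base.parameters():
            p.requires_grad_(False)
        # "query generator": maps a collaborative embedding to the
        # two low-rank factors of a LoRA-style weight update.
        self.to_a = nn.Linear(d_collab, d_model * rank)
        self.to_b = nn.Linear(d_collab, rank * d_model)
        self.rank = rank

    def forward(self, x, collab):                    # x: (B, T, d), collab: (B, d_collab)
        B, T, d = x.shape
        A = self.to_a(collab).view(B, d, self.rank)
        Bm = self.to_b(collab).view(B, self.rank, d)
        delta = A @ Bm                               # per-user low-rank update (B, d, d)
        return self.base(x) + torch.einsum("btd,bde->bte", x, delta)

out = CollabLoRALinear()(torch.randn(2, 5, 64), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 5, 64])
```

Because the collaborative signal lives in the weights, the token sequence stays purely textual, which appears to be the motivation behind moving away from concatenation.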
arXiv Detail & Related papers (2024-08-20T08:36:59Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of the graph's nodes.
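A toy sketch of that "enhance only a fraction of nodes" idea: pick a small set of nodes (here, by degree), enrich their features through an LLM call, then run one aggregation step. The selection heuristic and `enrich_fn` are placeholders, not E-LLaGNN's actual components:

```python
import torch

def enrich_some_nodes(x, adj, enrich_fn, frac=0.1):
    """x: (N, d) node features, adj: (N, N) adjacency, enrich_fn: LLM call."""
    degree = adj.sum(dim=1)
    k = max(1, int(frac * x.size(0)))
    top = torch.topk(degree, k).indices        # e.g. pick high-degree nodes
    x = x.clone()
    x[top] = enrich_fn(x[top])                 # on-demand LLM enrichment
    deg_inv = 1.0 / degree.clamp(min=1)
    return deg_inv.unsqueeze(1) * (adj @ x)    # one mean-aggregation step

x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
out = enrich_some_nodes(x, adj, enrich_fn=lambda h: h * 2.0)
print(out.shape)  # torch.Size([6, 8])
```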
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Fine-tuning Multimodal Large Language Models for Product Bundling [53.01642741096356]
We introduce Bundle-MLLM, a novel framework that fine-tunes large language models (LLMs) through a hybrid item tokenization approach.
Specifically, we integrate textual, media, and relational data into a unified tokenization, introducing a soft separation token to distinguish between textual and non-textual tokens.
We propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling.
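A schematic sketch of the hybrid tokenization step, with a learnable soft separation token between textual and non-textual tokens; all dimensions and module names are made up for illustration, not Bundle-MLLM's actual code:

```python
import torch
import torch.nn as nn

class HybridItemTokenizer(nn.Module):
    def __init__(self, d_model=64, d_media=32, d_rel=16):
        super().__init__()
        self.media_proj = nn.Linear(d_media, d_model)     # media feature -> token
        self.rel_proj = nn.Linear(d_rel, d_model)         # relational feature -> token
        self.sep = nn.Parameter(torch.randn(1, d_model))  # learnable soft separator

    def forward(self, text_emb, media_feat, rel_feat):
        """text_emb: (T, d); returns one sequence mixing all modalities."""
        parts = [text_emb, self.sep,
                 self.media_proj(media_feat).unsqueeze(0), self.sep,
                 self.rel_proj(rel_feat).unsqueeze(0)]
        return torch.cat(parts, dim=0)  # this unified sequence feeds the LLM

tok = HybridItemTokenizer()
seq = tok(torch.randn(5, 64), torch.randn(32), torch.randn(16))
print(seq.shape)  # torch.Size([9, 64])
```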
arXiv Detail & Related papers (2024-07-16T13:30:14Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
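The core move, blending externally predicted masks into the image as a visual prompt before the MLLM sees it, can be sketched in a few lines; the blending scheme below is a generic illustration, not the paper's exact method:

```python
import numpy as np

def overlay_mask(image: np.ndarray, mask: np.ndarray, alpha=0.5,
                 color=(255, 0, 0)) -> np.ndarray:
    """image: (H, W, 3) uint8, mask: (H, W) bool -> visually prompted image."""
    out = image.astype(np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
msk = np.zeros((4, 4), dtype=bool); msk[1:3, 1:3] = True
print(overlay_mask(img, msk)[1, 1])  # roughly half-red pixel
```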
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- LLM Attributor: Interactive Visual Attribution for LLM Generation [29.116016627864095]
This Python library provides interactive visualizations for training data attribution of large language models.
Our library offers a new way to quickly attribute an LLM's text generation to training data points.
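The library's own API is not reproduced here; the sketch below only illustrates the general idea of attribution, ranking training texts by similarity to a generation, with a hypothetical `embed` encoder:

```python
import numpy as np

def attribute(generated: str, train_texts: list[str], embed) -> list[tuple[str, float]]:
    """Rank training texts by cosine similarity to the generated text."""
    g = embed(generated)
    scores = []
    for t in train_texts:
        v = embed(t)
        scores.append((t, float(g @ v / (np.linalg.norm(g) * np.linalg.norm(v)))))
    return sorted(scores, key=lambda p: -p[1])

# toy embedding: bag of character codes (stand-in for a real encoder)
toy = lambda s: np.bincount([ord(c) % 32 for c in s], minlength=32).astype(float)
print(attribute("llm output", ["llm output text", "unrelated"], toy)[0][0])
```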
arXiv Detail & Related papers (2024-04-01T13:16:34Z)
- CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation [60.2700801392527]
We introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation.
CoLLM captures collaborative information through an external traditional model and maps it to the input token embedding space of the LLM.
Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance.
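A condensed sketch of that mapping step: a collaborative embedding from a conventional recommender is projected by a small MLP into the LLM's token-embedding space and prepended as a soft token. Dimensions and names are illustrative, not CoLLM's exact configuration:

```python
import torch
import torch.nn as nn

class CollabProjector(nn.Module):
    def __init__(self, d_collab=32, d_llm=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_collab, d_llm),
                                 nn.ReLU(), nn.Linear(d_llm, d_llm))

    def forward(self, collab_emb, text_token_embs):
        """collab_emb: (B, d_collab); text_token_embs: (B, T, d_llm)."""
        soft_token = self.mlp(collab_emb).unsqueeze(1)   # (B, 1, d_llm)
        return torch.cat([soft_token, text_token_embs], dim=1)

proj = CollabProjector()
seq = proj(torch.randn(2, 32), torch.randn(2, 7, 64))
print(seq.shape)  # torch.Size([2, 8, 64])
```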
arXiv Detail & Related papers (2023-10-30T12:25:00Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
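The "explanations as features" pipeline can be sketched as: prompt an LLM to explain each node's text, encode the explanation with a smaller LM, and concatenate the result with the original node features before the GNN. `explain` and `encode` below are hypothetical stand-ins:

```python
import numpy as np

def build_node_features(texts, feats, explain, encode):
    """texts: list[str]; feats: (N, d) array; returns (N, d + d_e)."""
    expl_vecs = np.stack([encode(explain(t)) for t in texts])
    return np.concatenate([feats, expl_vecs], axis=1)

# toy stand-ins for the LLM explainer and the smaller LM encoder
toy_explain = lambda t: f"This paper is about {t.split()[0]}."
toy_encode = lambda s: np.array([len(s), s.count(" ")], dtype=float)
X = build_node_features(["graph learning", "llm prompting"],
                        np.zeros((2, 3)), toy_explain, toy_encode)
print(X.shape)  # (2, 5)
```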
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
- Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method [35.181659789684545]
Automatic summarization generates concise summaries that contain key ideas of source documents.
References from CNN/DailyMail and BBC XSum are noisy, mainly in terms of factual hallucination and information redundancy.
We propose a Summary Chain-of-Thought (SumCoT) technique to elicit LLMs to generate summaries step by step.
Experimental results show our method outperforms state-of-the-art fine-tuned PLMs and zero-shot LLMs by +4.33/+4.77 in ROUGE-L.
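A two-step sketch of the SumCoT idea: first elicit the core elements, then summarize conditioned on them. The prompts below only paraphrase the concept; the paper's actual prompts differ, and `call_llm` is a hypothetical completion function:

```python
ELEMENTS_PROMPT = ("List the core elements of the article below "
                   "(entities, dates, events, result):\n{article}")
SUMMARY_PROMPT = ("Article:\n{article}\n\nCore elements:\n{elements}\n\n"
                  "Write a one-sentence summary grounded in these elements.")

def sum_cot(call_llm, article: str) -> str:
    # step 1: extract the key elements; step 2: summarize using them
    elements = call_llm(ELEMENTS_PROMPT.format(article=article))
    return call_llm(SUMMARY_PROMPT.format(article=article, elements=elements))
```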
arXiv Detail & Related papers (2023-05-22T18:54:35Z)
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
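A bare-bones sketch of the framework: a small policy model emits a per-instance hint (the "directional stimulus") that is inserted into the black-box LLM's prompt, leaving the LLM itself untouched. Both callables are hypothetical:

```python
def dsp_generate(policy_model, black_box_llm, article: str) -> str:
    stimulus = policy_model(article)            # e.g. "Hint: keywords ..."
    prompt = (f"{article}\n\n{stimulus}\n\n"
              "Summarize the article using the hint above.")
    return black_box_llm(prompt)                # LLM weights stay untouched
```

In the paper, the policy model itself is trained with supervised fine-tuning followed by reinforcement learning against the black-box LLM's downstream reward.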
arXiv Detail & Related papers (2023-02-22T17:44:15Z)