ONCE: Boosting Content-based Recommendation with Both Open- and
Closed-source Large Language Models
- URL: http://arxiv.org/abs/2305.06566v4
- Date: Thu, 31 Aug 2023 13:43:43 GMT
- Title: ONCE: Boosting Content-based Recommendation with Both Open- and
Closed-source Large Language Models
- Authors: Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
- Abstract summary: Large language models (LLMs) possess deep semantic comprehension and extensive knowledge from pretraining.
We explore the potential of leveraging both open- and closed-source LLMs to enhance content-based recommendation.
We observed a significant relative improvement of up to 19.32% compared to existing state-of-the-art recommendation models.
- Score: 39.193602991105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized content-based recommender systems have become indispensable
tools for users to navigate through the vast amount of content available on
platforms like daily news websites and book recommendation services. However,
existing recommenders face significant challenges in understanding the content
of items. Large language models (LLMs), which possess deep semantic
comprehension and extensive knowledge from pretraining, have proven to be
effective in various natural language processing tasks. In this study, we
explore the potential of leveraging both open- and closed-source LLMs to
enhance content-based recommendation. With open-source LLMs, we utilize their
deep layers as content encoders, enriching the representation of content at the
embedding level. For closed-source LLMs, we employ prompting techniques to
enrich the training data at the token level. Through comprehensive experiments,
we demonstrate the high effectiveness of both types of LLMs and show the
synergistic relationship between them. Notably, we observed a significant
relative improvement of up to 19.32% compared to existing state-of-the-art
recommendation models. These findings highlight the immense potential of both
open- and closed-source LLMs in enhancing content-based recommendation
systems. We will make our code and LLM-generated data available for other
researchers to reproduce our results.
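The embedding-level use of open-source LLMs described above can be illustrated with a minimal sketch (not the authors' released code): a frozen open-source LLM encodes each item's text, and its deep hidden states are pooled into an item embedding that a downstream recommender consumes. The model name, pooling strategy, and maximum length below are illustrative assumptions, not details taken from the paper.

```python
# Sketch: open-source LLM as a content encoder for item text (embedding level).
# Assumptions: model choice (LLaMA-2 7B), mean pooling over the last hidden layer.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed; any open-source LLM could be used

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
encoder.eval()  # the LLM stays frozen; only the recommender head would be trained


@torch.no_grad()
def encode_item(text: str) -> torch.Tensor:
    """Return one embedding vector for an item (e.g. a news article or book blurb)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    outputs = encoder(**inputs)
    last_hidden = outputs.hidden_states[-1]        # (1, seq_len, hidden_dim)
    mask = inputs["attention_mask"].unsqueeze(-1)  # (1, seq_len, 1)
    # Mean-pool token states from a deep layer into a single item representation.
    item_emb = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return item_emb.squeeze(0)                     # (hidden_dim,)
```

A user can then be represented by aggregating the embeddings of previously clicked items, with candidate items scored, for example, by a dot product. The complementary closed-source side of the approach (prompting an API-only LLM to rewrite or expand item text and thereby enrich the training data at the token level) would sit upstream of this encoder as a data-augmentation step.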
Related papers
- HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling [21.495443162191332]
Large Language Models (LLMs) have achieved remarkable success in various fields, prompting several studies to explore their potential in recommendation systems.
We propose a novel Hierarchical Large Language Model (HLLM) architecture designed to enhance sequential recommendation systems.
HLLM achieves excellent scalability, with the largest configuration utilizing 7B parameters for both item feature extraction and user interest modeling.
arXiv Detail & Related papers (2024-09-19T13:03:07Z) - Do Large Language Models Need a Content Delivery Network? [4.816440228214873]
We envision a Knowledge Delivery Network (KDN) that dynamically optimizes the storage, transfer, and composition of KV cache across LLM engines and other compute and storage resources.
We have open-sourced a KDN prototype at https://github.com/LMCache/LMCache.
arXiv Detail & Related papers (2024-09-16T18:46:24Z) - MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and a unified latent space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z) - MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series [86.31735321970481]
We open-source MAP-Neo, a bilingual language model with 7B parameters trained from scratch on 4.5T high-quality tokens.
Our MAP-Neo is the first fully open-sourced bilingual LLM with performance comparable to existing state-of-the-art LLMs.
arXiv Detail & Related papers (2024-05-29T17:57:16Z) - Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms, namely Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Augmented Large Language Models with Parametric Knowledge Guiding [72.71468058502228]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) with their impressive language understanding and generation capabilities.
Their performance may be suboptimal for domain-specific tasks that require specialized knowledge due to limited exposure to the related data.
We propose the novel Parametric Knowledge Guiding (PKG) framework, which equips LLMs with a knowledge-guiding module to access relevant knowledge.
arXiv Detail & Related papers (2023-05-08T15:05:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.