Representation Learning with Large Language Models for Recommendation
- URL: http://arxiv.org/abs/2310.15950v4
- Date: Sun, 25 Feb 2024 05:44:27 GMT
- Title: Representation Learning with Large Language Models for Recommendation
- Authors: Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang,
Dawei Yin, Chao Huang
- Abstract summary: We propose RLMRec, a model-agnostic framework that enhances recommenders with LLM-empowered representation learning.
RLMRec incorporates auxiliary textual signals, develops a user/item profiling paradigm empowered by LLMs, and aligns the semantic space of LLMs with the representation space of collaborative relational signals.
- Score: 34.46344639742642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems have seen significant advancements with the influence of
deep learning and graph neural networks, particularly in capturing complex
user-item relationships. However, these graph-based recommenders heavily depend
on ID-based data, potentially disregarding valuable textual information
associated with users and items, resulting in less informative learned
representations. Moreover, the utilization of implicit feedback data introduces
potential noise and bias, posing challenges for the effectiveness of user
preference learning. While the integration of large language models (LLMs) into
traditional ID-based recommenders has gained attention, challenges such as
scalability issues, limitations in text-only reliance, and prompt input
constraints need to be addressed for effective implementation in practical
recommender systems. To address these challenges, we propose RLMRec, a
model-agnostic framework that enhances existing recommenders with
LLM-empowered representation learning. The framework introduces a
recommendation paradigm that integrates
representation learning with LLMs to capture intricate semantic aspects of user
behaviors and preferences. RLMRec incorporates auxiliary textual signals,
develops a user/item profiling paradigm empowered by LLMs, and aligns the
semantic space of LLMs with the representation space of collaborative
relational signals through a cross-view alignment framework. This work further
establishes a theoretical foundation demonstrating that incorporating textual
signals through mutual information maximization enhances the quality of
representations. In our evaluation, we integrate RLMRec with state-of-the-art
recommender models, while also analyzing its efficiency and robustness to noisy
data. Our implementation codes are available at
https://github.com/HKUDS/RLMRec.
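To make the cross-view alignment concrete: the contrastive form of such an objective is typically an InfoNCE loss, which maximizes a lower bound on the mutual information between the collaborative and semantic embedding spaces, matching the theoretical framing above. The sketch below is an illustrative approximation rather than the authors' implementation; the projection layer, dimensions, temperature, and all variable names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_view_infonce(cf_emb: torch.Tensor,
                       sem_emb: torch.Tensor,
                       proj: nn.Module,
                       temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style alignment between collaborative-filtering (CF)
    embeddings and embeddings of LLM-generated user/item profiles."""
    z_cf = F.normalize(cf_emb, dim=-1)           # (N, d_cf)
    z_sem = F.normalize(proj(sem_emb), dim=-1)   # (N, d_cf) after projection
    logits = z_cf @ z_sem.t() / temperature      # (N, N) similarity matrix
    labels = torch.arange(cf_emb.size(0), device=cf_emb.device)
    # Row i treats profile i as the positive and the rest of the batch as
    # negatives; the symmetric term does the same for the columns.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Hypothetical usage with stand-in tensors: the term is simply added to the
# base recommender's loss, which is what makes the framework model-agnostic.
proj = nn.Linear(1536, 64)      # e.g. text-embedding dim -> CF embedding dim
cf_emb, sem_emb = torch.randn(256, 64), torch.randn(256, 1536)
align_loss = cross_view_infonce(cf_emb, sem_emb, proj)
```

Because the alignment is an auxiliary loss over embeddings, it can be attached to any base recommender (e.g., a GNN model such as LightGCN) without architectural changes, consistent with the model-agnostic claim.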
Related papers
- Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning [57.28766250993726]
This work explores adapting to dynamic user interests without any model updates.
Existing Large Language Model (LLM)-based recommenders often lose the in-context learning ability during recommendation tuning.
We propose RecICL, which customizes recommendation-specific in-context learning for real-time recommendations.
arXiv Detail & Related papers (2024-10-30T15:48:36Z)
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System [83.34921966305804]
Large language models (LLMs) have demonstrated remarkable performance in recommender systems.
We propose a novel plug-and-play alignment framework for LLMs and collaborative models.
Our method is superior to existing state-of-the-art algorithms.
arXiv Detail & Related papers (2024-08-15T15:56:23Z)
- Beyond Inter-Item Relations: Dynamic Adaption for Enhancing LLM-Based Sequential Recommendation [83.87767101732351]
Sequential recommender systems (SRS) predict the next items that users may prefer based on user historical interaction sequences.
Inspired by the rise of large language models (LLMs) in various AI applications, there is a surge of work on LLM-based SRS.
We propose DARec, a sequential recommendation model built on top of coarse-grained adaption for capturing inter-item relations.
arXiv Detail & Related papers (2024-08-14T10:03:40Z)
- MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and a unified latent space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z)
- Enhancing Collaborative Semantics of Language Model-Driven Recommendations via Graph-Aware Learning [10.907949155931474]
Large Language Models (LLMs) are increasingly prominent in the recommendation systems domain.
Gal-Rec enhances the understanding of user-item collaborative semantics by imitating the intent of Graph Neural Networks (GNNs).
Gal-Rec significantly improves the comprehension of collaborative semantics and boosts recommendation performance.
arXiv Detail & Related papers (2024-06-19T05:50:15Z)
- TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation [16.93374578679005]
TokenRec is a novel tokenization and retrieval framework for Large Language Model (LLM)-based Recommender Systems (RecSys).
Its Masked Vector-Quantized (MQ) Tokenizer quantizes masked user/item representations learned from collaborative filtering into discrete tokens (see the quantization sketch after this list).
Its generative retrieval paradigm efficiently recommends top-$K$ items without auto-regressive decoding or beam search.
arXiv Detail & Related papers (2024-06-15T00:07:44Z)
- Integrating Large Language Models with Graphical Session-Based Recommendation [8.086277931395212]
We introduce LLMGR, a framework that brings large language models to graphical Session-Based Recommendation (SBR).
This framework bridges the gap by harmoniously integrating LLMs with Graph Neural Networks (GNNs) for SBR tasks.
This integration seeks to leverage the complementary strengths of LLMs in natural language understanding and GNNs in relational data processing.
arXiv Detail & Related papers (2024-02-26T12:55:51Z)
- DRDT: Dynamic Reflection with Divergent Thinking for LLM-based Sequential Recommendation [53.62727171363384]
We introduce a novel reasoning principle: Dynamic Reflection with Divergent Thinking.
Our methodology is dynamic reflection, a process that emulates human learning through probing, critiquing, and reflecting.
We evaluate our approach on three datasets using six pre-trained LLMs.
arXiv Detail & Related papers (2023-12-18T16:41:22Z)
- Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications [0.7832189413179361]
Large Language Models (LLMs) excel in comprehending and generating human-like text.
This paper explores strategies for integrating Large Language Models (LLMs) with Information Retrieval (IR) systems.
arXiv Detail & Related papers (2023-11-21T02:01:01Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
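For the TokenRec entry above, the tokenization step can be pictured as residual vector quantization: a continuous collaborative-filtering embedding is mapped to a short sequence of discrete codebook indices that an LLM can consume as tokens. The sketch below is a simplified stand-in for the Masked Vector-Quantized tokenizer (it omits the masking scheme and training losses; all names and sizes are assumptions).

```python
import torch
import torch.nn as nn

class VQTokenizer(nn.Module):
    """Toy residual vector quantizer: CF embedding -> discrete token ids."""

    def __init__(self, num_codes: int = 512, dim: int = 64, levels: int = 2):
        super().__init__()
        # One learnable codebook per quantization level.
        self.codebooks = nn.ParameterList(
            [nn.Parameter(torch.randn(num_codes, dim)) for _ in range(levels)]
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        """emb: (N, dim) embeddings -> (N, levels) integer token ids."""
        residual, ids = emb, []
        for codebook in self.codebooks:
            dists = torch.cdist(residual, codebook)  # (N, num_codes)
            idx = dists.argmin(dim=-1)               # nearest code per row
            ids.append(idx)
            residual = residual - codebook[idx]      # quantize the remainder
        return torch.stack(ids, dim=-1)

tokenizer = VQTokenizer()
item_tokens = tokenizer(torch.randn(10, 64))  # (10, 2) tensor of token ids
```

Discrete ids of this kind are what allow a generative recommender to emit an item as a few tokens instead of scoring the full catalog.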