Recommender AI Agent: Integrating Large Language Models for Interactive
Recommendations
- URL: http://arxiv.org/abs/2308.16505v3
- Date: Tue, 30 Jan 2024 03:17:26 GMT
- Title: Recommender AI Agent: Integrating Large Language Models for Interactive
Recommendations
- Authors: Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
- Abstract summary: We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
- Score: 53.76682562935373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender models excel at providing domain-specific item recommendations by
leveraging extensive user behavior data. Despite their ability to act as
lightweight domain experts, they struggle to perform versatile tasks such as
providing explanations and engaging in conversations. On the other hand, large
language models (LLMs) represent a significant step towards artificial general
intelligence, showcasing remarkable capabilities in instruction comprehension,
commonsense reasoning, and human interaction. However, LLMs lack the knowledge
of domain-specific item catalogs and behavioral patterns, particularly in areas
that diverge from general world knowledge, such as online e-commerce.
Fine-tuning LLMs for each domain is neither economical nor efficient.
In this paper, we bridge the gap between recommender models and LLMs,
combining their respective strengths to create a versatile and interactive
recommender system. We introduce an efficient framework called
InteRecAgent, which employs LLMs as the brain and recommender models
as tools. We first outline a minimal set of essential tools required to
transform LLMs into InteRecAgent. We then propose an efficient workflow within
InteRecAgent for task execution, incorporating key components such as memory
components, dynamic demonstration-augmented task planning, and reflection.
InteRecAgent enables traditional recommender systems, such as ID-based
matrix factorization models, to become interactive systems with a natural
language interface through the integration of LLMs. Experimental results on
several public datasets show that InteRecAgent achieves satisfactory performance
as a conversational recommender system, outperforming general-purpose LLMs. The
source code of InteRecAgent is released at https://aka.ms/recagent.
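The abstract describes the overall pattern: an LLM acts as the agent's brain, recommender models are exposed as tools, and the workflow adds memory, dynamic demonstration-augmented task planning, and reflection. The sketch below is a minimal illustration of that pattern only; every name in it (Memory, Tool, plan, interact, the llm callable, the prompts) is an assumption made for illustration and not the released InteRecAgent API (see https://aka.ms/recagent for the actual code).

# Minimal sketch of the "LLM as brain, recommender models as tools" loop.
# All names and prompts here are illustrative assumptions, not the released API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Memory:
    """Stands in for the memory components: dialogue history plus a candidate
    list that tools read from and write to."""
    dialogue: List[str] = field(default_factory=list)
    candidates: List[str] = field(default_factory=list)


@dataclass
class Tool:
    """Wraps one recommender model, e.g. an ID-based matrix factorization
    retriever or ranker, behind a uniform interface the LLM can invoke."""
    name: str
    description: str
    run: Callable[[Memory, str], List[str]]


def plan(llm: Callable[[str], str], query: str, tools: Dict[str, Tool],
         demos: List[str]) -> List[str]:
    """Demonstration-augmented planning: the LLM drafts a tool-call sequence,
    conditioned on a few demonstrations retrieved for similar queries."""
    prompt = (
        "Tools:\n" + "\n".join(f"- {t.name}: {t.description}" for t in tools.values())
        + "\nDemonstrations:\n" + "\n".join(demos)
        + f"\nUser query: {query}\nReply with a comma-separated tool plan:"
    )
    return [s.strip() for s in llm(prompt).split(",") if s.strip() in tools]


def interact(llm: Callable[[str], str], query: str, tools: Dict[str, Tool],
             demos: List[str], memory: Memory, max_retries: int = 2) -> List[str]:
    """One conversational turn: plan, execute the planned tools, then reflect
    and retry if the LLM judges the result unsatisfactory."""
    memory.dialogue.append(query)
    for _ in range(max_retries + 1):
        for step in plan(llm, query, tools, demos):
            memory.candidates = tools[step].run(memory, query)
        verdict = llm(
            f"Query: {query}\nRecommended items: {memory.candidates}\n"
            "Answer OK if they satisfy the query, otherwise RETRY:"
        )
        if "OK" in verdict.upper():
            break
    return memory.candidates

Under these assumptions, turning a traditional recommender into a conversational one amounts to wrapping its retrieval and ranking calls as Tool.run functions and plugging any instruction-following LLM in as the llm callable.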
Related papers
- AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML [56.565200973244146]
Automated machine learning (AutoML) accelerates AI development by automating tasks in the development pipeline.
Recent works have started exploiting large language models (LLMs) to lessen this burden.
This paper proposes AutoML-Agent, a novel multi-agent framework tailored for full-pipeline AutoML.
arXiv Detail & Related papers (2024-10-03T20:01:09Z)
- Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation [21.281471662696372]
We propose the Multimodal Large Language Model-enhanced Multimodal Sequential Recommendation (MLLM-MSR) model.
To capture dynamic user preferences, we design a two-stage user preference summarization method.
We then employ a recurrent user preference summarization generation paradigm to capture the dynamic changes in user preferences.
arXiv Detail & Related papers (2024-08-19T04:44:32Z)
- UniMEL: A Unified Framework for Multimodal Entity Linking with Large Language Models [0.42832989850721054]
Multimodal Entity Linking (MEL) is a crucial task that aims at linking ambiguous mentions within multimodal contexts to referent entities in a multimodal knowledge base, such as Wikipedia.
Existing methods overcomplicate the MEL task and overlook the visual semantic information, which makes them costly and hard to scale.
We propose UniMEL, a unified framework which establishes a new paradigm to process multimodal entity linking tasks using Large Language Models.
arXiv Detail & Related papers (2024-07-23T03:58:08Z)
- Let Me Do It For You: Towards LLM Empowered Recommendation via Tool Learning [57.523454568002144]
Large language models (LLMs) have shown capabilities in commonsense reasoning and leveraging external tools.
We introduce ToolRec, a framework for LLM-empowered recommendations via tool learning.
We formulate recommendation as a process of exploring user interests at attribute granularity.
We consider two types of attribute-oriented tools: rank tools and retrieval tools.
arXiv Detail & Related papers (2024-05-24T00:06:54Z)
- Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the aforementioned capabilities into a planner, caller, and summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability; a minimal sketch of this decomposition appears after the related-papers list.
arXiv Detail & Related papers (2024-01-14T16:17:07Z)
- On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is an LLM-based user simulator for recommendation.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z)
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM [62.30753425449056]
We propose a novel closed-loop system that bridges data generation, model training, and evaluation.
Within each loop, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results.
To target the identified weaknesses, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data.
For quality, we resort to GPT-4 to generate high-quality data for each given data type.
arXiv Detail & Related papers (2023-08-25T01:41:04Z)
- Generative Multimodal Entity Linking [24.322540112710918]
Multimodal Entity Linking (MEL) is the task of mapping mentions with multimodal contexts to referent entities from a knowledge base.
Existing MEL methods mainly focus on designing complex multimodal interaction mechanisms and require fine-tuning all model parameters.
We propose GEMEL, a Generative Multimodal Entity Linking framework based on Large Language Models (LLMs).
Our framework is compatible with any off-the-shelf language model, paving the way towards an efficient and general solution.
arXiv Detail & Related papers (2023-06-22T07:57:19Z)
- PALR: Personalization Aware LLMs for Recommendation [7.407353565043918]
PALR aims to combine user history behaviors (such as clicks, purchases, and ratings) with large language models (LLMs) to generate user-preferred items.
Our solution outperforms state-of-the-art models on various sequential recommendation tasks.
arXiv Detail & Related papers (2023-05-12T17:21:33Z)
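Several of the papers above share a tool-learning shape. As one concrete illustration, the sketch below mirrors the planner, caller, and summarizer decomposition mentioned for the multi-LLM agent paper; the role prompts and function names are assumptions for illustration, not that paper's interface.

# Illustrative planner / caller / summarizer decomposition: each role can be
# served by a separate (and smaller) LLM. Prompts and names are assumptions.
from typing import Callable, Dict

LLM = Callable[[str], str]


def run_turn(planner: LLM, caller: LLM, summarizer: LLM,
             tools: Dict[str, Callable[[str], str]], query: str) -> str:
    # Planner: choose which tool, if any, should handle the query.
    tool_name = planner(
        f"Tools: {', '.join(tools)}\nQuery: {query}\nName exactly one tool or NONE:"
    ).strip()
    observation = ""
    if tool_name in tools:
        # Caller: produce concrete arguments and invoke the chosen tool.
        args = caller(f"Tool: {tool_name}\nQuery: {query}\nArguments:")
        observation = tools[tool_name](args)
    # Summarizer: turn the raw tool output into the user-facing reply.
    return summarizer(f"Query: {query}\nTool output: {observation}\nReply:")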