AgentCF: Collaborative Learning with Autonomous Language Agents for
Recommender Systems
- URL: http://arxiv.org/abs/2310.09233v1
- Date: Fri, 13 Oct 2023 16:37:14 GMT
- Title: AgentCF: Collaborative Learning with Autonomous Language Agents for
Recommender Systems
- Authors: Junjie Zhang, Yupeng Hou, Ruobing Xie, Wenqi Sun, Julian McAuley,
Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen
- Abstract summary: We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
- Score: 112.76941157194544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, LLM-powered agents have emerged as believable human
proxies, owing to their remarkable decision-making capability. However,
existing studies mainly focus on simulating human dialogue. Human non-verbal
behaviors, such as item clicking in recommender systems, implicitly exhibit
user preferences and could enhance user modeling, yet they have not been
deeply explored. The main reasons lie in the gap between language modeling and
behavior modeling, as well as LLMs' limited understanding of user-item
relations.
To address this issue, we propose AgentCF for simulating user-item
interactions in recommender systems through agent-based collaborative
filtering. We creatively consider not only users but also items as agents, and
develop a collaborative learning approach that optimizes both kinds of agents
together. Specifically, at each time step, we first prompt the user and item
agents to interact autonomously. Then, based on the disparities between the
agents' decisions and real-world interaction records, user and item agents are
prompted to reflect on and adjust the misleading simulations collaboratively,
thereby modeling their two-sided relations. The optimized agents can also
propagate their preferences to other agents in subsequent interactions,
implicitly capturing the collaborative filtering idea. Overall, the optimized
agents exhibit diverse interaction behaviors within our framework, including
user-item, user-user, item-item, and collective interactions. The results show
that these agents can demonstrate personalized behaviors akin to those of
real-world individuals, sparking the development of next-generation user
behavior simulation.
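To make the optimization loop concrete, the sketch below illustrates one possible reading of the procedure described in the abstract: at each step a user agent autonomously chooses among candidate item agents, and when its choice disagrees with the real interaction record, both sides reflect and rewrite their textual profiles. This is a minimal sketch assuming a plain `llm(prompt) -> str` callable; the class names, prompt wording, and profile format are illustrative assumptions, not the authors' released implementation.

```python
# Minimal AgentCF-style loop (illustrative only; names and prompts are assumptions).
from dataclasses import dataclass
from typing import Callable, List

LLM = Callable[[str], str]  # assumed interface: prompt text in, completion text out


@dataclass
class ItemAgent:
    title: str
    profile: str = "No description learned yet."


@dataclass
class UserAgent:
    user_id: str
    profile: str = "Preferences unknown so far."


def simulate_choice(llm: LLM, user: UserAgent, candidates: List[ItemAgent]) -> int:
    """User agent autonomously picks one candidate item; returns its index."""
    listing = "\n".join(f"{i}: {it.title} -- {it.profile}" for i, it in enumerate(candidates))
    prompt = (
        f"You are a user with these preferences: {user.profile}\n"
        f"Candidate items:\n{listing}\n"
        "Reply with the index of the item you would click."
    )
    digits = "".join(ch for ch in llm(prompt) if ch.isdigit())
    return int(digits) % len(candidates) if digits else 0


def collaborative_reflection(llm: LLM, user: UserAgent, chosen: ItemAgent, true_item: ItemAgent) -> None:
    """On a mismatch with the real record, user and item agents rewrite their profiles."""
    user.profile = llm(
        f"Your current self-description: {user.profile}\n"
        f"You picked '{chosen.title}' but actually interacted with '{true_item.title}'.\n"
        "Rewrite your preference description so it explains the real choice."
    )
    true_item.profile = llm(
        f"Current item description: {true_item.profile}\n"
        f"A user with preferences '{user.profile}' chose this item.\n"
        "Rewrite the description to reflect what kind of users it attracts."
    )


def agentcf_step(llm: LLM, user: UserAgent, pos: ItemAgent, neg: ItemAgent) -> None:
    """One time step: autonomous interaction, then reflection on disparities."""
    candidates = [pos, neg]
    picked = candidates[simulate_choice(llm, user, candidates)]
    if picked is not pos:  # disparity with the real-world interaction record
        collaborative_reflection(llm, user, picked, pos)
```

Because the updated profiles are plain text that later interactions read, the adjusted preferences can propagate to other agents over time, which is one way to interpret how the framework implicitly captures the collaborative filtering idea.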
Related papers
- ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents [11.118991548784459]
Large language model (LLM)-based agents have been increasingly used to interact with external environments.
Current frameworks, however, do not enable these agents to interact with users in order to align on the details of their tasks.
This work introduces ReSpAct, a novel framework that combines the essential skills for building task-oriented "conversational" agents.
arXiv Detail & Related papers (2024-11-01T15:57:45Z) - FLOW: A Feedback LOop FrameWork for Simultaneously Enhancing Recommendation and User Agents [28.25107058257086]
We propose a novel framework named FLOW, which achieves collaboration between the recommendation agent and the user agent by introducing a feedback loop.
Specifically, the recommendation agent refines its understanding of the user's preferences by analyzing the user agent's feedback on previously suggested items.
This iterative refinement process enhances the reasoning capabilities of both the recommendation agent and the user agent, enabling more precise recommendations.
arXiv Detail & Related papers (2024-10-26T00:51:39Z) - Learning to Use Tools via Cooperative and Interactive Agents [58.77710337157665]
Tool learning empowers large language models (LLMs) as agents to use external tools and extend their utility.
We propose ConAgents, a Cooperative and interactive Agents framework, which coordinates three specialized agents for tool selection, tool execution, and action calibration separately.
Our experiments on three datasets show that LLMs equipped with ConAgents outperform baselines by a substantial margin.
arXiv Detail & Related papers (2024-03-05T15:08:16Z) - Affordable Generative Agents [16.372072265248192]
Affordable Generative Agents (AGA) is a framework for enabling the generation of believable and low-cost interactions at both the agent-environment and inter-agent levels.
Our code is publicly available at: https://github.com/AffordableGenerativeAgents/Affordable-Generative-Agents.
arXiv Detail & Related papers (2024-02-03T06:16:28Z) - On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator in recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z) - SPA: Verbal Interactions between Agents and Avatars in Shared Virtual
Environments using Propositional Planning [61.335252950832256]
Sense-Plan-Ask, or SPA, generates plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments.
We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication.
arXiv Detail & Related papers (2020-02-08T23:15:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.