CoRA: Collaborative Information Perception by Large Language Model's Weights for Recommendation
- URL: http://arxiv.org/abs/2408.10645v3
- Date: Fri, 25 Oct 2024 10:23:02 GMT
- Title: CoRA: Collaborative Information Perception by Large Language Model's Weights for Recommendation
- Authors: Yuting Liu, Jinghao Zhang, Yizhou Dang, Yuliang Liang, Qiang Liu, Guibing Guo, Jianzhe Zhao, Xingwei Wang
- Abstract summary: Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation.
Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input.
We propose a new paradigm, Collaborative LoRA (CoRA), with a collaborative query generator.
- Score: 13.867950651601483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation. Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input and then fine-tuning to align these features with the LLM's input space. Although effective, in this work we identify two limitations when adapting LLMs to recommendation tasks, which hinder the integration of general knowledge and collaborative information and result in sub-optimal recommendation performance. (1) Fine-tuning the LLM with recommendation data can undermine its inherent world knowledge and fundamental competencies, which are crucial for interpreting and inferring recommendation text. (2) Incorporating collaborative features into textual prompts disrupts the semantics of the original prompts, preventing the LLM from generating appropriate outputs. In this paper, we propose a new paradigm, Collaborative LoRA (CoRA), with a collaborative query generator. Rather than input space alignment, this method aligns collaborative information with the LLM's parameter space, representing it as incremental weights that update the LLM's output. This way, the LLM perceives collaborative information without altering its general knowledge and text inference capabilities. Specifically, we employ a collaborative filtering model to extract user and item embeddings and inject them into a set number of learnable queries. We then convert the collaborative queries into collaborative weights with low-rank properties and merge the collaborative weights into the LLM's weights, enabling the LLM to perceive the collaborative signals and generate personalized recommendations without fine-tuning or extra collaborative tokens in prompts. Extensive experiments confirm that CoRA effectively integrates collaborative information into the LLM, enhancing recommendation performance.
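To make the weight-space idea concrete, here is a minimal PyTorch sketch of the mechanism the abstract describes; the module name, dimensions, attention-based fusion, and mean pooling are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): collaborative user/item
# embeddings are injected into learnable queries, converted into low-rank
# incremental weights, and merged into a frozen LLM weight matrix.
import torch
import torch.nn as nn

class CollaborativeQueryGenerator(nn.Module):
    """Hypothetical generator: CF embeddings -> low-rank delta weights."""
    def __init__(self, cf_dim, hidden, n_queries, rank, d_model, n_heads=4):
        super().__init__()
        # hidden must be divisible by n_heads
        self.queries = nn.Parameter(torch.randn(n_queries, hidden))  # learnable queries
        self.cf_proj = nn.Linear(cf_dim, hidden)   # lift CF embeddings into query space
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.to_a = nn.Linear(hidden, d_model * rank)  # factor A of the update
        self.to_b = nn.Linear(hidden, rank * d_model)  # factor B of the update
        self.rank, self.d_model = rank, d_model

    def forward(self, user_emb, item_emb):
        # user_emb, item_emb: (batch, cf_dim) from a frozen CF model
        cf = self.cf_proj(torch.stack([user_emb, item_emb], dim=1))  # (batch, 2, hidden)
        q = self.queries.unsqueeze(0).expand(cf.size(0), -1, -1)
        fused, _ = self.attn(q, cf, cf)        # queries attend to collaborative features
        pooled = fused.mean(dim=1)             # (batch, hidden)
        a = self.to_a(pooled).view(-1, self.d_model, self.rank)
        b = self.to_b(pooled).view(-1, self.rank, self.d_model)
        return a @ b                           # (batch, d_model, d_model), rank-limited

def collaborative_linear(x, frozen_weight, delta_w):
    # Forward pass of one frozen LLM linear layer with the collaborative
    # increment merged in: y = x @ (W + delta_W)^T; base weights stay untouched.
    return x @ (frozen_weight + delta_w).transpose(-1, -2)
```

Because the increment is built from rank-limited factors (as in LoRA) and the base weights stay frozen, the backbone's general knowledge is untouched while its forward pass still perceives the collaborative signal, and no collaborative tokens are added to the prompt.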
Related papers
- Beyond Retrieval: Generating Narratives in Conversational Recommender Systems [4.912663905306209]
We introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations.
We establish benchmarks using well-known generative metrics, and perform an automated evaluation of the new dataset using a rater LLM.
To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
arXiv Detail & Related papers (2024-10-22T07:53:41Z)
- Enhancing High-order Interaction Awareness in LLM-based Recommender Model [3.7623606729515133]
This paper presents an enhanced LLM-based recommender (ELMRec).
We enhance whole-word embeddings to substantially improve LLMs' interpretation of graph-constructed interactions for recommendations.
Our ELMRec outperforms state-of-the-art (SOTA) methods in both direct and sequential recommendations.
arXiv Detail & Related papers (2024-09-30T06:07:12Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
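The following is a hedged sketch of the bi-tuning layout described above, assuming a HuggingFace-style frozen backbone that accepts inputs_embeds; the wrapper class, shapes, and last-token pooling are invented for illustration, and the actual Laser/M-Former design is more involved.

```python
# Hedged illustration of bi-tuning: a trainable prefix carries collaborative
# information into a frozen LLM, and a trainable suffix projects the LLM's
# output from the language space into the recommendation space.
import torch
import torch.nn as nn

class BiTuningWrapper(nn.Module):
    def __init__(self, llm, d_model, n_prefix, rec_dim):
        super().__init__()
        self.llm = llm
        for p in self.llm.parameters():        # backbone stays frozen
            p.requires_grad_(False)
        self.prefix = nn.Parameter(torch.randn(n_prefix, d_model))
        self.suffix = nn.Linear(d_model, rec_dim)  # language -> recommendation space

    def forward(self, token_embs, collab_embs):
        # token_embs: (batch, seq, d_model); collab_embs: (batch, 1, d_model)
        # from a frozen ID-based sequential recommender (cf. M-Former's role).
        b = token_embs.size(0)
        prefix = self.prefix.unsqueeze(0).expand(b, -1, -1) + collab_embs
        h = self.llm(inputs_embeds=torch.cat([prefix, token_embs], dim=1))
        return self.suffix(h.last_hidden_state[:, -1])  # next-item scores
```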
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- LARR: Large Language Model Aided Real-time Scene Recommendation with Semantic Understanding [19.510385758079966]
This paper introduces Large Language Model Aided Real-time Scene Recommendation (LARR).
arXiv Detail & Related papers (2024-08-21T10:56:26Z)
- Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration [70.09561665520043]
We propose a novel framework for multi-agent collaboration that introduces Reinforced Advantage feedback (ReAd) for efficient self-refinement of plans.
We provide theoretical analysis by extending advantage-weighted regression in reinforcement learning to multi-agent systems.
Experiments on Overcooked-AI and a difficult variant of RoCoBench show that ReAd surpasses baselines in success rate and significantly reduces the number of agent interaction steps.
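For context, the single-agent advantage-weighted regression objective that ReAd generalizes to multi-agent settings has the standard form (notation taken from the RL literature, not from the paper itself):

$$\max_{\theta}\; \mathbb{E}_{(s,a)\sim\mathcal{D}}\!\left[\log \pi_{\theta}(a \mid s)\,\exp\!\left(\tfrac{1}{\beta}\,A^{\pi}(s,a)\right)\right],$$

where $A^{\pi}(s,a)$ is the advantage of action $a$ in state $s$ and $\beta$ is a temperature; ReAd uses such advantage estimates as feedback so an LLM planner can refine its own plans.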
arXiv Detail & Related papers (2024-05-23T08:33:19Z)
- Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation [34.29410946387975]
We introduce collaborative retrieval-augmented LLMs, CoRAL, which directly incorporate collaborative evidence into prompts.
LLMs can analyze shared and distinct preferences among users, and summarize the patterns indicating which types of users would be attracted by certain items.
Our experimental results show that CoRAL can significantly improve LLMs' reasoning abilities on specific recommendation tasks.
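As a purely hypothetical illustration of placing collaborative evidence in a prompt (the helper name and template are assumptions, not CoRAL's actual retrieval or prompt format):

```python
# Hypothetical sketch: serialize retrieved interactions of similar users as
# collaborative evidence inside the prompt, so the LLM can reason over
# shared and distinct preferences.
def build_collaborative_prompt(target_user, candidate_item, similar_user_histories):
    evidence = "\n".join(
        f"- user {uid} liked: {', '.join(items)}"
        for uid, items in similar_user_histories
    )
    return (
        "Users with tastes similar to the target user interacted with:\n"
        f"{evidence}\n"
        f"Based on this collaborative evidence, would user {target_user} "
        f"be attracted to item {candidate_item}? Answer yes or no, with a reason."
    )

# Example:
# build_collaborative_prompt("u42", "i7", [("u11", ["i3", "i7"]), ("u19", ["i7", "i9"])])
```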
arXiv Detail & Related papers (2024-03-11T05:49:34Z)
- MEGAnno+: A Human-LLM Collaborative Annotation System [6.10245234757137]
Large language models (LLMs) can label data faster and cheaper than humans for various NLP tasks.
Despite their prowess, LLMs may fall short in understanding complex, sociocultural, or domain-specific context.
We advocate a collaborative approach where humans and LLMs work together to produce reliable and high-quality labels.
arXiv Detail & Related papers (2024-02-28T04:58:07Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.