User-LLM: Efficient LLM Contextualization with User Embeddings
- URL: http://arxiv.org/abs/2402.13598v1
- Date: Wed, 21 Feb 2024 08:03:27 GMT
- Title: User-LLM: Efficient LLM Contextualization with User Embeddings
- Authors: Lin Ning, Luyang Liu, Jiaxing Wu, Neo Wu, Devora Berlowitz, Sushant
Prakash, Bradley Green, Shawn O'Banion, Jun Xie
- Abstract summary: We propose User-LLM, a novel framework that leverages user embeddings to contextualize large language models (LLMs).
Our experiments on MovieLens, Amazon Review, and Google Local Review datasets demonstrate significant performance gains across various tasks.
- Score: 24.099604517203606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have revolutionized natural language processing.
However, effectively incorporating complex and potentially noisy user
interaction data remains a challenge. To address this, we propose User-LLM, a
novel framework that leverages user embeddings to contextualize LLMs. These
embeddings, distilled from diverse user interactions using self-supervised
pretraining, capture latent user preferences and their evolution over time. We
integrate these user embeddings with LLMs through cross-attention and
soft-prompting, enabling LLMs to dynamically adapt to user context. Our
comprehensive experiments on MovieLens, Amazon Review, and Google Local Review
datasets demonstrate significant performance gains across various tasks.
Notably, our approach outperforms text-prompt-based contextualization on long
sequence tasks and tasks that require deep user understanding while being
computationally efficient. We further incorporate Perceiver layers to
streamline the integration between user encoders and LLMs, reducing
computational demands.
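The abstract describes the integration mechanism at a high level: a user encoder distills interaction histories into embeddings, Perceiver-style layers compress them, and the LLM consumes them through cross-attention or soft-prompting. The PyTorch sketch below illustrates that flow under stated assumptions; the module names, dimensions, and residual wiring are illustrative, not the paper's actual implementation.

```python
# Illustrative sketch of User-LLM-style contextualization (not the authors' code).
# A user encoder produces per-event embeddings; a Perceiver-style resampler
# compresses them into a few latents; the LLM attends to those latents via
# cross-attention, or receives them as soft-prompt prefix embeddings.
import torch
import torch.nn as nn


class PerceiverResampler(nn.Module):
    """Compress a variable-length sequence of user-event embeddings into K latents."""

    def __init__(self, d_model: int, num_latents: int = 8, num_heads: int = 4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, user_events: torch.Tensor) -> torch.Tensor:
        # user_events: (batch, num_events, d_model) from a pretrained user encoder
        batch = user_events.size(0)
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)
        latents, _ = self.attn(queries, user_events, user_events)
        return self.norm(latents)  # (batch, num_latents, d_model)


class UserCrossAttentionBlock(nn.Module):
    """Cross-attention from LLM hidden states to the compressed user latents."""

    def __init__(self, d_model: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, user_latents: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); user_latents: (batch, K, d_model)
        ctx, _ = self.attn(hidden, user_latents, user_latents)
        return self.norm(hidden + ctx)  # residual add, as in typical adapter layers


if __name__ == "__main__":
    d_model, batch, events, seq_len = 64, 2, 50, 16
    user_events = torch.randn(batch, events, d_model)   # user-encoder outputs (stand-in)
    hidden = torch.randn(batch, seq_len, d_model)       # LLM hidden states (stand-in)

    latents = PerceiverResampler(d_model)(user_events)              # (2, 8, 64)
    contextualized = UserCrossAttentionBlock(d_model)(hidden, latents)

    # Soft-prompting alternative: prepend the latents to the token embeddings.
    token_embeds = torch.randn(batch, seq_len, d_model)
    soft_prompted = torch.cat([latents, token_embeds], dim=1)       # (2, 8 + 16, 64)
    print(contextualized.shape, soft_prompted.shape)
```

In a setup like this, only the resampler and cross-attention adapters would need fine-tuning on top of a frozen user encoder and LLM, which is consistent with the abstract's claim that embedding-based contextualization is cheaper than feeding long raw histories as text prompts.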
Related papers
- Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale [51.9706400130481]
Large Language Models (LLMs) have emerged as personalized assistants for users across a wide range of tasks.
PERSONAMEM features curated user profiles with over 180 simulated user-LLM interaction histories.
We evaluate LLM chatbots' ability to identify the most suitable response according to the current state of the user's profile.
arXiv Detail & Related papers (2025-04-19T08:16:10Z) - HistLLM: A Unified Framework for LLM-Based Multimodal Recommendation with User History Encoding and Compression [33.34435467588446]
HistLLM is an innovative framework that integrates textual and visual features through a User History Encoding Module (UHEM), compressing user history interactions into a single token representation.
Extensive experiments demonstrate the effectiveness and efficiency of our proposed mechanism.
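A minimal, hypothetical sketch of how multimodal history features could be pooled into one token embedding for an LLM, as this summary describes; the projection layers, attention pooling, and dimensions are assumptions, not HistLLM's released UHEM code.

```python
# Hypothetical compression of user-history features into a single "history token".
import torch
import torch.nn as nn


class HistoryCompressor(nn.Module):
    def __init__(self, d_text: int, d_image: int, d_llm: int, num_heads: int = 4):
        super().__init__()
        self.text_proj = nn.Linear(d_text, d_llm)
        self.image_proj = nn.Linear(d_image, d_llm)
        self.query = nn.Parameter(torch.randn(1, d_llm) * 0.02)  # one learnable query
        self.attn = nn.MultiheadAttention(d_llm, num_heads, batch_first=True)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, n, d_text); image_feats: (batch, n, d_image)
        history = torch.cat([self.text_proj(text_feats), self.image_proj(image_feats)], dim=1)
        q = self.query.unsqueeze(0).expand(history.size(0), -1, -1)
        token, _ = self.attn(q, history, history)
        return token  # (batch, 1, d_llm): one history token to splice into the prompt


compressor = HistoryCompressor(d_text=128, d_image=256, d_llm=768)
history_token = compressor(torch.randn(2, 20, 128), torch.randn(2, 20, 256))
print(history_token.shape)  # torch.Size([2, 1, 768])
```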
arXiv Detail & Related papers (2025-04-14T12:01:11Z) - UQABench: Evaluating User Embedding for Prompting LLMs in Personalized Question Answering [39.79275025010785]
UQABench is a benchmark designed to evaluate the effectiveness of user embeddings in prompting large language models for personalization.
We conduct extensive experiments on various state-of-the-art methods for modeling user embeddings.
arXiv Detail & Related papers (2025-02-26T14:34:00Z) - LIBER: Lifelong User Behavior Modeling Based on Large Language Models [42.045535303737694]
We propose Lifelong User Behavior Modeling (LIBER) based on large language models.
LIBER has been deployed on Huawei's music recommendation service, improving users' play count and play time by 3.01% and 7.69%, respectively.
arXiv Detail & Related papers (2024-11-22T03:43:41Z) - Aligning LLMs with Individual Preferences via Interaction [51.72200436159636]
We train large language models (LLMs) that can "interact to align" with individual user preferences.
We develop a multi-turn preference dataset containing 3K+ multi-turn conversations in tree structures.
For evaluation, we establish the ALOE benchmark, consisting of 100 carefully selected examples and well-designed metrics to measure the customized alignment performance during conversations.
arXiv Detail & Related papers (2024-10-04T17:48:29Z) - Beyond Inter-Item Relations: Dynamic Adaption for Enhancing LLM-Based Sequential Recommendation [83.87767101732351]
Sequential recommender systems (SRS) predict the next items that users may prefer based on user historical interaction sequences.
Inspired by the rise of large language models (LLMs) in various AI applications, there is a surge of work on LLM-based SRS.
We propose DARec, a sequential recommendation model built on top of coarse-grained adaptation for capturing inter-item relations.
arXiv Detail & Related papers (2024-08-14T10:03:40Z) - Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models [66.24055500785657]
Traditional turn-based chat systems prevent users from verbally interacting with the system while it is generating responses.
To overcome these limitations, we adapt existing LLMs to listen to users while generating output and to provide users with instant feedback.
We build a dataset consisting of alternating time slices of queries and responses as well as covering typical feedback types in instantaneous interactions.
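For illustration only, a hypothetical record shape for such a time-sliced duplex conversation; the field names and values are invented and may differ from the paper's actual dataset schema.

```python
# Hypothetical structure of one duplex training example: alternating time slices
# of user queries, model responses, and instantaneous user feedback.
duplex_example = {
    "slices": [
        {"t": 0.0, "speaker": "user",      "text": "Summarize this article for me."},
        {"t": 1.0, "speaker": "assistant", "text": "Sure, the article argues that..."},
        {"t": 1.5, "speaker": "user",      "text": "Keep it to two sentences."},   # instant feedback
        {"t": 2.0, "speaker": "assistant", "text": "In short: ..."},               # model adapts mid-generation
    ]
}
```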
arXiv Detail & Related papers (2024-06-22T03:20:10Z) - A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation [15.153844486572932]
This paper proposes a practice-friendly LLM-enhanced paradigm with preference parsing (P2Rec) for sequential recommender systems (SRS).
Specifically, in the information reconstruction stage, we design a new user-level SFT task for collaborative information injection with the assistance of a pre-trained SRS model.
Our goal is to let LLM learn to reconstruct a corresponding prior preference distribution from each user's interaction sequence.
arXiv Detail & Related papers (2024-06-01T07:18:56Z) - Breaking the Length Barrier: LLM-Enhanced CTR Prediction in Long Textual User Behaviors [25.086118164540974]
Large language models (LLMs) are used to improve the performance of click-through rate (CTR) prediction.
As user sequences grow longer, the current efficiency of LLMs is inadequate for training on billions of users and items.
We propose Behavior Aggregated Hierarchical Encoding (BAHE) to enhance the efficiency of LLM-based CTR modeling.
arXiv Detail & Related papers (2024-03-28T12:05:15Z) - CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation [60.2700801392527]
We introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation.
CoLLM captures collaborative information through an external traditional model and maps it to the input token embedding space of LLM.
Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance.
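A minimal sketch of the general pattern this summary describes: a collaborative-filtering embedding is projected into the LLM's token-embedding space and spliced into the input sequence. The mapper architecture and dimensions below are assumptions, not CoLLM's actual design.

```python
# Hypothetical mapping of a collaborative embedding into an LLM's token-embedding space.
import torch
import torch.nn as nn

d_cf, d_llm = 64, 768                      # CF embedding size vs. LLM hidden size (assumed)
cf_user_emb = torch.randn(1, d_cf)         # from a pretrained collaborative model (stand-in)
mapper = nn.Sequential(nn.Linear(d_cf, d_llm), nn.GELU(), nn.Linear(d_llm, d_llm))

user_token = mapper(cf_user_emb)           # (1, d_llm): behaves like one extra input token
prompt_embeds = torch.randn(1, 12, d_llm)  # embeddings of the textual prompt tokens (stand-in)
llm_input = torch.cat([user_token.unsqueeze(1), prompt_embeds], dim=1)
print(llm_input.shape)                     # torch.Size([1, 13, 768])
```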
arXiv Detail & Related papers (2023-10-30T12:25:00Z) - Do LLMs Understand User Preferences? Evaluating LLMs On User Rating
Prediction [15.793007223588672]
Large Language Models (LLMs) have demonstrated exceptional capabilities in generalizing to new tasks in a zero-shot or few-shot manner.
We investigate various LLMs in different sizes, ranging from 250M to 540B parameters and evaluate their performance in zero-shot, few-shot, and fine-tuning scenarios.
arXiv Detail & Related papers (2023-05-10T21:43:42Z) - Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z) - Check Your Facts and Try Again: Improving Large Language Models with
External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)