User Embedding Model for Personalized Language Prompting
- URL: http://arxiv.org/abs/2401.04858v1
- Date: Wed, 10 Jan 2024 00:35:52 GMT
- Title: User Embedding Model for Personalized Language Prompting
- Authors: Sumanth Doddapaneni, Krishna Sayana, Ambarish Jash, Sukhdeep Sodhi,
Dima Kuzmin
- Abstract summary: We introduce a new User Embedding Module (UEM) that efficiently processes free-form-text user history by compressing and representing it as embeddings.
Our experiments demonstrate the superior capability of this approach in handling significantly longer histories.
The main contribution of this research is to demonstrate the ability to bias language models with user signals represented as embeddings.
- Score: 9.472634942498859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling long histories plays a pivotal role in enhancing recommendation
systems, allowing them to capture users' evolving preferences and produce more
precise and personalized recommendations. In this study we tackle the
challenges of modeling long user histories for preference understanding in
natural language. Specifically, we introduce a new User Embedding Module (UEM)
that efficiently processes free-form-text user history by compressing it into
embeddings, which are then used as soft prompts to an LM. Our
experiments demonstrate the superior capability of this approach in handling
significantly longer histories compared to conventional text-based prompting
methods, yielding substantial improvements in predictive performance. The main
contribution of this research is to demonstrate the ability to bias language
models with user signals represented as embeddings.
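The mechanism the abstract describes (compressing a variable-length free-form user history into a fixed number of embedding vectors that are prepended to the LM's input as soft prompts) can be sketched roughly as below. This is a minimal illustration, not the paper's actual UEM: the dimensions, the mean-pool-plus-projection compressor, and all names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8       # LM hidden size (illustrative)
COMPRESSED_LEN = 2  # number of soft-prompt vectors per user (illustrative)

def embed_history(history_items, item_token_embeddings):
    """Embed each free-form history item (here: by averaging toy token vectors)."""
    return np.stack(
        [item_token_embeddings[item].mean(axis=0) for item in history_items]
    )

def compress_to_soft_prompt(item_embeddings, projection):
    """Compress a variable-length history into a fixed number of soft-prompt
    vectors. A mean-pool followed by a projection stands in here for the
    paper's learned User Embedding Module."""
    pooled = item_embeddings.mean(axis=0)                 # (EMBED_DIM,)
    return (projection @ pooled).reshape(COMPRESSED_LEN, EMBED_DIM)

# Toy "token embeddings" for three history items of different token lengths.
item_token_embeddings = {
    "watched: sci-fi movie": rng.normal(size=(4, EMBED_DIM)),
    "read: space opera novel": rng.normal(size=(5, EMBED_DIM)),
    "liked: astronomy podcast": rng.normal(size=(3, EMBED_DIM)),
}
projection = rng.normal(size=(COMPRESSED_LEN * EMBED_DIM, EMBED_DIM))

history = list(item_token_embeddings)
soft_prompt = compress_to_soft_prompt(
    embed_history(history, item_token_embeddings), projection
)

# The soft prompt is prepended to the token embeddings of the text prompt, so
# the LM sees COMPRESSED_LEN + prompt_len vectors no matter how long the
# history is -- this is what lets the approach scale to long histories.
text_prompt = rng.normal(size=(6, EMBED_DIM))  # 6 "tokens" of the task prompt
lm_input = np.concatenate([soft_prompt, text_prompt], axis=0)
print(lm_input.shape)
```

However long the history grows, the soft prompt stays `COMPRESSED_LEN` vectors, whereas text-based prompting would spend LM context tokens on every history item.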
Related papers
- SPAR: Personalized Content-Based Recommendation via Long Engagement Attention [43.04717491985609]
Leveraging users' long engagement histories is essential for personalized content recommendations.
We introduce a content-based recommendation framework, SPAR, which effectively tackles the challenges of holistic user interest extraction.
Our framework outperforms existing state-of-the-art (SoTA) methods.
arXiv Detail & Related papers (2024-02-16T10:36:38Z)
- Personalized Language Modeling from Personalized Human Feedback [49.344833339240566]
Reinforcement Learning from Human Feedback (RLHF) is commonly used to fine-tune large language models to better align with human preferences.
In this work, we aim to address this problem by developing methods for building personalized language models.
arXiv Detail & Related papers (2024-02-06T04:18:58Z)
- Large Language Models for Intent-Driven Session Recommendations [34.64421003286209]
We introduce a novel ISR approach, utilizing the advanced reasoning capabilities of large language models (LLMs).
We introduce an innovative prompt optimization mechanism that iteratively self-reflects and adjusts prompts.
This new paradigm empowers LLMs to discern diverse user intents at a semantic level, leading to more accurate and interpretable session recommendations.
arXiv Detail & Related papers (2023-12-07T02:25:14Z)
- RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z)
- Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach called User-Guided Response Optimization (UGRO), which combines an LLM with a smaller task-oriented dialogue model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining them with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Modeling User Behaviour in Research Paper Recommendation System [8.980876474818153]
A user intention model is proposed based on deep sequential topic analysis.
The model predicts a user's intention in terms of the topic of interest.
The proposed approach introduces a new road map for modeling user activity suitable for the design of a research paper recommendation system.
arXiv Detail & Related papers (2021-07-16T11:31:03Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.