User Embedding Model for Personalized Language Prompting
- URL: http://arxiv.org/abs/2401.04858v1
- Date: Wed, 10 Jan 2024 00:35:52 GMT
- Title: User Embedding Model for Personalized Language Prompting
- Authors: Sumanth Doddapaneni, Krishna Sayana, Ambarish Jash, Sukhdeep Sodhi,
Dima Kuzmin
- Abstract summary: We introduce a new User Embedding Module (UEM) that efficiently processes user history in free-form text by compressing it into embeddings.
Our experiments demonstrate the superior capability of this approach in handling significantly longer histories.
The main contribution of this research is to demonstrate the ability to bias language models with user signals represented as embeddings.
- Score: 9.472634942498859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling long histories plays a pivotal role in enhancing recommendation
systems, allowing them to capture users' evolving preferences and yielding
more precise and personalized recommendations. In this study, we tackle the
challenges of modeling long user histories for preference understanding in
natural language. Specifically, we introduce a new User Embedding Module (UEM)
that efficiently processes user history in free-form text by compressing it
into embeddings that serve as soft prompts to an LM. Our experiments
demonstrate the superior capability of this approach in handling
significantly longer histories compared to conventional text-based prompting
methods, yielding substantial improvements in predictive performance. The main
contribution of this research is to demonstrate the ability to bias language
models with user signals represented as embeddings.
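The abstract stops at the architectural idea, so here is a minimal sketch of the described pipeline: encode each free-form history item into an embedding, compress the sequence, and prepend the result to the LM's token embeddings as soft prompts. The transformer-based compressor, the dimensions, and the variable names are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class UserEmbeddingModule(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): map a
    sequence of per-item history embeddings to soft prompts for an LM."""
    def __init__(self, item_dim: int = 256, lm_dim: int = 768, depth: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=item_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(item_dim, lm_dim)  # into the LM embedding space

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, num_items, item_dim), one embedding per free-form
        # text item (e.g., produced by a frozen sentence encoder)
        return self.proj(self.encoder(history))  # (batch, num_items, lm_dim)

uem = UserEmbeddingModule()
history = torch.randn(2, 100, 256)        # 100-item histories for 2 users
soft_prompts = uem(history)               # (2, 100, 768)
token_embeds = torch.randn(2, 32, 768)    # embedded task-prompt tokens
lm_inputs = torch.cat([soft_prompts, token_embeds], dim=1)  # biases the LM
```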
Related papers
- PERSOMA: PERsonalized SOft ProMpt Adapter Architecture for Personalized Language Prompting [44.32537382154617]
PERSOMA offers a novel approach to efficiently capture user history.
It achieves this by resampling and compressing interactions, expressed as free-form text, into expressive soft prompt embeddings.
Our results demonstrate PERSOMA's superior ability to handle large and complex user histories compared to existing embedding-based and text-prompt-based techniques.
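The abstract does not specify the resampler, so the following assumes a Perceiver-style learned-query cross-attention that maps an arbitrarily long encoded history to a fixed number of soft prompts; treat it as a sketch of the idea rather than PERSOMA's implementation.

```python
import torch
import torch.nn as nn

class SoftPromptResampler(nn.Module):
    """Assumed mechanism: learned queries cross-attend over the encoded
    interaction history, compressing it to a fixed set of soft prompts."""
    def __init__(self, dim: int = 768, num_prompts: int = 16, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, history_len, dim), encoded free-form text
        q = self.queries.unsqueeze(0).expand(interactions.size(0), -1, -1)
        prompts, _ = self.attn(q, interactions, interactions)
        return prompts  # (batch, num_prompts, dim): fixed size for any history

resampler = SoftPromptResampler()
long_history = torch.randn(2, 2048, 768)  # 2048 interactions per user
print(resampler(long_history).shape)      # torch.Size([2, 16, 768])
```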
arXiv Detail & Related papers (2024-08-02T00:24:22Z) - Language Representations Can be What Recommenders Need: Findings and Potentials [57.90679739598295]
We show that item representations, when linearly mapped from advanced LM representations, yield superior recommendation performance.
This outcome suggests the possible homomorphism between the advanced language representation space and an effective item representation space for recommendation.
Our findings highlight the connection between language modeling and behavior modeling, which can inspire both natural language processing and recommender system communities.
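A toy illustration of the linear-mapping claim, fitting a least-squares map from LM representations to an item space; the data, dimensions, and the least-squares fit are stand-ins chosen for brevity, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
lm_reps = rng.normal(size=(1000, 768))   # stand-in LM embeddings of items
item_reps = rng.normal(size=(1000, 64))  # stand-in target item space

# Fit W minimizing ||lm_reps @ W - item_reps||^2, then use the linearly
# mapped LM representations directly as item representations.
W, *_ = np.linalg.lstsq(lm_reps, item_reps, rcond=None)
mapped_items = lm_reps @ W               # (1000, 64)

user_vec = rng.normal(size=64)           # a user in the item space
scores = mapped_items @ user_vec         # rank items by dot product
top10 = np.argsort(-scores)[:10]
```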
arXiv Detail & Related papers (2024-07-07T17:05:24Z) - Preference Distillation for Personalized Generative Recommendation [11.27949757550442]
We propose a PErsonAlized PrOmpt Distillation (PeaPOD) approach to distill user preferences as personalized soft prompts.
Considering the complexities of user preferences in the real world, we maintain a shared set of learnable prompts that are dynamically weighted based on the user's interests.
Experimental results on three real-world datasets demonstrate the effectiveness of our PeaPOD model on sequential recommendation, top-n recommendation, and explanation generation tasks.
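One plausible reading of "dynamically weighted" is softmax attention over a shared prompt pool keyed by a user-interest vector; the sketch below assumes exactly that, with hypothetical names and shapes.

```python
import torch
import torch.nn as nn

class SharedPromptPool(nn.Module):
    """Assumed mechanism: a shared pool of learnable prompts is mixed
    into one personalized soft prompt via interest-based attention."""
    def __init__(self, num_prompts: int = 32, prompt_len: int = 4,
                 dim: int = 768):
        super().__init__()
        self.pool = nn.Parameter(torch.randn(num_prompts, prompt_len, dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, user_interest: torch.Tensor) -> torch.Tensor:
        # user_interest: (batch, dim) summary of a user's interests
        weights = torch.softmax(user_interest @ self.keys.T, dim=-1)
        # Weighted sum over the pool -> one soft prompt per user.
        return torch.einsum('bn,nld->bld', weights, self.pool)

pool = SharedPromptPool()
user_vec = torch.randn(2, 768)
personalized_prompt = pool(user_vec)  # (2, 4, 768), used as a soft prompt
```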
arXiv Detail & Related papers (2024-07-06T09:58:58Z) - SPAR: Personalized Content-Based Recommendation via Long Engagement Attention [43.04717491985609]
Leveraging users' long engagement histories is essential for personalized content recommendations.
We introduce a content-based recommendation framework, SPAR, which effectively tackles the challenge of extracting holistic user interests from long engagement histories.
Our framework outperforms existing state-of-the-art (SoTA) methods.
arXiv Detail & Related papers (2024-02-16T10:36:38Z) - RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z) - Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z) - Unlocking the Potential of User Feedback: Leveraging Large Language
Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach, User-Guided Response Optimization (UGRO), which pairs an LLM with a smaller task-oriented dialogue (TOD) model.
The LLM serves as an annotation-free user simulator that assesses dialogue responses, and its feedback is combined with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
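A hedged sketch of the annotation-free scoring loop; `llm_judge` is a hypothetical placeholder for whatever LLM call performs the rating, not UGRO's actual prompt or API.

```python
def llm_judge(dialogue_context: str, response: str) -> float:
    """Hypothetical: ask an LLM, acting as the user, to rate its
    satisfaction with `response` on a 0-10 scale and parse the number."""
    raise NotImplementedError("wire up an LLM client here")

def pick_best_response(context: str, candidates: list[str]) -> str:
    # Score candidates from the smaller TOD model with the simulated
    # user; the preferred response can then supervise the TOD model.
    scores = [llm_judge(context, c) for c in candidates]
    return max(zip(scores, candidates))[1]
```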
arXiv Detail & Related papers (2023-06-16T13:04:56Z) - Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
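A toy version of template-based instruction generation; the template wording and field names below are invented for illustration and are not among the paper's 39 templates.

```python
# Hypothetical template covering the four aspects named in the abstract:
# preference, intention, task form, and context.
TEMPLATE = ("The user prefers {preference}. The user currently wants "
            "{intention}. Task: {task_form}. Context: {context}. "
            "Recommend suitable items.")

def make_instruction(preference: str, intention: str,
                     task_form: str, context: str) -> str:
    return TEMPLATE.format(preference=preference, intention=intention,
                           task_form=task_form, context=context)

print(make_instruction(
    preference="science-fiction novels",
    intention="a gift for a friend who likes space operas",
    task_form="top-5 ranking",
    context="browsing on mobile",
))
```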
arXiv Detail & Related papers (2023-05-11T17:39:07Z) - Reward Constrained Interactive Recommendation with Natural Language
Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
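One way the constraint could enter the objective is a penalty subtracted from the task reward in proportion to the discriminator's violation probability; this Lagrangian-style sketch is an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ViolationDiscriminator(nn.Module):
    """Scores how likely a recommendation violates the preference
    implied by the user's history (architecture is illustrative)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, user_hist: torch.Tensor, rec: torch.Tensor):
        return torch.sigmoid(self.net(torch.cat([user_hist, rec], dim=-1)))

def constrained_reward(env_reward, user_hist, rec, disc, lam=1.0):
    # r' = r - lambda * P(violation); lambda tuned or learned.
    return env_reward - lam * disc(user_hist, rec).squeeze(-1)

disc = ViolationDiscriminator()
r = constrained_reward(torch.ones(4), torch.randn(4, 64),
                       torch.randn(4, 64), disc)
```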
arXiv Detail & Related papers (2020-05-04T16:23:34Z)