Aligning LLM Agents by Learning Latent Preference from User Edits
- URL: http://arxiv.org/abs/2404.15269v2
- Date: Sun, 9 Jun 2024 21:45:09 GMT
- Title: Aligning LLM Agents by Learning Latent Preference from User Edits
- Authors: Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, Dipendra Misra
- Abstract summary: We study interactive learning of language agents based on user edits made to the agent's output.
We propose a learning framework, PRELUDE, that infers a description of the user's latent preference based on historic edit data.
We introduce two interactive environments -- summarization and email writing -- and use a GPT-4-simulated user for evaluation.
- Score: 23.235995078727658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study interactive learning of LLM-based language agents from user edits made to the agent's output. In a typical setting such as a writing assistant, the user interacts with a language agent to generate a response given a context, and may optionally edit the agent's response to personalize it based on their latent preference, in addition to improving its correctness. Edit feedback is naturally generated, making it a suitable signal for improving the agent's alignment with the user's preference and for reducing the cost of user edits over time. We propose a learning framework, PRELUDE, that infers a description of the user's latent preference from historic edit data. The inferred preference descriptions are used to define prompts for generating responses in the future. This avoids fine-tuning the agent, which is costly, hard to scale with the number of users, and may even degrade the agent's performance on other tasks. Furthermore, learning a descriptive preference improves interpretability, allowing the user to view and modify the learned preference. However, user preferences can be complex, subtle, and context-dependent, making them challenging to learn. To address this, we propose a simple yet effective algorithm named CIPHER that leverages the LLM to infer the user's preference for a given context from the user's edits. For a new context, CIPHER retrieves the preferences inferred for the k closest contexts in the history and aggregates them for response generation. We introduce two interactive environments -- summarization and email writing -- and use a GPT-4-simulated user for evaluation. On both tasks, CIPHER outperforms several baselines, achieving the lowest edit-distance cost while incurring only a small overhead in LLM query cost. Our analysis reports that the user preferences learned by CIPHER show significant similarity to the ground-truth latent preferences.
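Below is a minimal sketch of the CIPHER-style interaction loop the abstract describes: infer a preference description from each user edit, then retrieve and aggregate the preferences of the k closest past contexts when generating a new response. The `llm` and `embed` callables are hypothetical stand-ins (any text-completion API and any fixed-size sentence embedder), and the class name and prompts are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a CIPHER-style agent (assumptions: `llm` is any text-completion
# callable, `embed` is any sentence embedder; neither is from the paper).
import math
from typing import Callable, List, Tuple

class CipherStyleAgent:
    def __init__(self, llm: Callable[[str], str],
                 embed: Callable[[str], List[float]], k: int = 5):
        self.llm, self.embed, self.k = llm, embed, k
        # History of (context embedding, inferred preference description).
        self.history: List[Tuple[List[float], str]] = []

    @staticmethod
    def _cosine(u: List[float], v: List[float]) -> float:
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def respond(self, context: str) -> str:
        """Prompt with preferences retrieved from the k closest past contexts."""
        q = self.embed(context)
        nearest = sorted(self.history, key=lambda h: self._cosine(q, h[0]),
                         reverse=True)[:self.k]
        prefs = "; ".join(p for _, p in nearest) or "none learned yet"
        return self.llm(f"User preference: {prefs}\n"
                        f"Context: {context}\nResponse:")

    def observe_edit(self, context: str, response: str, edited: str) -> None:
        """After the user edits a response, infer the latent preference it implies."""
        if edited == response:
            return  # no edit this round, nothing to learn
        pref = self.llm(
            "In one sentence, describe the writing preference implied by this edit.\n"
            f"Agent draft: {response}\nUser edit: {edited}\nPreference:")
        self.history.append((self.embed(context), pref))
```

Per the abstract, the user's per-round effort would be scored as the edit distance between `response` and `edited`; the loop aims to drive that cost down over time while adding only one extra LLM call per edited response.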
Related papers
- Apollonion: Profile-centric Dialog Agent [9.657755354649048]
We propose a framework for a dialog agent that incorporates user profiling (initialization and update): the user's queries and responses are analyzed and organized into a structured user profile.
We also propose a series of evaluation protocols for personalization: to what extent the response is personalized to different users.
arXiv Detail & Related papers (2024-04-10T03:32:41Z) - Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization [105.3612692153615]
A common technique for aligning large language models (LLMs) relies on acquiring human preferences.
We propose a new axis based on eliciting preferences jointly over instruction-response pairs.
We find that joint preferences over instruction and response pairs can significantly enhance the alignment of LLMs.
arXiv Detail & Related papers (2024-03-31T02:05:40Z) - User-LLM: Efficient LLM Contextualization with User Embeddings [24.099604517203606]
We propose User-LLM, a novel framework that leverages user embeddings to contextualize large language models (LLMs).
Our experiments on MovieLens, Amazon Review, and Google Local Review datasets demonstrate significant performance gains across various tasks.
arXiv Detail & Related papers (2024-02-21T08:03:27Z) - Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
arXiv Detail & Related papers (2024-02-12T22:47:57Z) - Personalized Language Modeling from Personalized Human Feedback [49.344833339240566]
Reinforcement Learning from Human Feedback (RLHF) is commonly used to fine-tune large language models to better align with human preferences.
In this work, we aim to address this problem by developing methods for building personalized language models.
arXiv Detail & Related papers (2024-02-06T04:18:58Z) - Active Preference Inference using Language Models and Probabilistic Reasoning [13.523369679010685]
We introduce an inference-time algorithm that helps large language models infer user preferences.
Our algorithm uses a probabilistic model whose conditional distributions are defined by prompting an LLM.
Results in a simplified interactive web shopping setting with real product items show that an LLM equipped with our entropy reduction algorithm outperforms baselines.
arXiv Detail & Related papers (2023-12-19T09:58:54Z) - Interpreting User Requests in the Context of Natural Language Standing Instructions [89.12540932734476]
We develop NLSI, a language-to-program dataset consisting of over 2.4K dialogues spanning 17 domains.
A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue.
arXiv Detail & Related papers (2023-11-16T11:19:26Z) - Eliciting Human Preferences with Language Models [56.68637202313052]
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
We propose to use *LMs themselves* to guide the task specification process.
We study GATE in three domains: email validation, content recommendation, and moral reasoning.
arXiv Detail & Related papers (2023-10-17T21:11:21Z) - Factual and Personalized Recommendations using Language Models and Reinforcement Learning [38.96462170594542]
We develop a comPelling, Precise, Personalized, Preference-relevant language model (P4LM).
P4LM recommends items to users while emphasizing explanations of item characteristics and their relevance.
We develop a joint reward function that measures precision, appeal, and personalization.
arXiv Detail & Related papers (2023-10-09T21:58:55Z) - Beyond the Chat: Executable and Verifiable Text-Editing with LLMs [87.84199761550634]
Conversational interfaces powered by Large Language Models (LLMs) have recently become a popular way to obtain feedback during document editing.
We present InkSync, an editing interface that suggests executable edits directly within the document being edited.
arXiv Detail & Related papers (2023-09-27T00:56:17Z)