ProPerSim: Developing Proactive and Personalized AI Assistants through User-Assistant Simulation
- URL: http://arxiv.org/abs/2509.21730v1
- Date: Fri, 26 Sep 2025 00:57:27 GMT
- Authors: Jiho Kim, Junseong Choi, Woosog Chay, Daeun Kyung, Yeonsu Kwon, Yohan Jo, Edward Choi
- Abstract summary: We introduce ProPerSim, a task and simulation framework for developing AI assistants capable of making timely, personalized recommendations. We propose ProPerAssistant, a retrieval-augmented, preference-aligned assistant that continually learns and adapts through user feedback.
- Score: 26.512935389758727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As large language models (LLMs) become increasingly integrated into daily life, there is growing demand for AI assistants that are not only reactive but also proactive and personalized. While recent advances have pushed forward proactivity and personalization individually, their combination remains underexplored. To bridge this gap, we introduce ProPerSim, a new task and simulation framework for developing assistants capable of making timely, personalized recommendations in realistic home scenarios. In our simulation environment, a user agent with a rich persona interacts with the assistant, providing ratings on how well each suggestion aligns with its preferences and context. The assistant's goal is to use these ratings to learn and adapt to achieve higher scores over time. Built on ProPerSim, we propose ProPerAssistant, a retrieval-augmented, preference-aligned assistant that continually learns and adapts through user feedback. Experiments across 32 diverse personas show that ProPerAssistant adapts its strategy and steadily improves user satisfaction, highlighting the promise of uniting proactivity and personalization.
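The abstract describes a feedback loop: a persona-driven user agent rates each suggestion, and the assistant uses those ratings to adapt over time. A minimal sketch of that loop is below; all class and function names (`UserAgent`, `Assistant`, `simulate`) are illustrative assumptions, not the paper's actual API, and the toy preference table stands in for the paper's rich LLM-driven personas.

```python
from dataclasses import dataclass, field

@dataclass
class UserAgent:
    # Toy persona: a table mapping suggestions to 1-5 ratings.
    preferences: dict

    def rate(self, suggestion: str) -> int:
        # Score how well the suggestion matches this persona.
        return self.preferences.get(suggestion, 1)

@dataclass
class Assistant:
    # Memory of (suggestion, rating) feedback used for adaptation.
    history: list = field(default_factory=list)

    def suggest(self, candidates: list) -> str:
        # Prefer the candidate with the best observed average rating;
        # unseen candidates get a neutral prior of 3.0.
        scores = {}
        for s, r in self.history:
            scores.setdefault(s, []).append(r)
        def avg(s):
            return sum(scores[s]) / len(scores[s]) if s in scores else 3.0
        return max(candidates, key=avg)

    def learn(self, suggestion: str, rating: int) -> None:
        self.history.append((suggestion, rating))

def simulate(user, assistant, candidates, turns=20):
    # One episode: suggest, get a rating, adapt, repeat.
    ratings = []
    for _ in range(turns):
        s = assistant.suggest(candidates)
        r = user.rate(s)
        assistant.learn(s, r)
        ratings.append(r)
    return ratings

user = UserAgent(preferences={"stretch break": 5, "loud music": 1})
assistant = Assistant()
ratings = simulate(user, assistant, ["stretch break", "loud music"])
```

Even with this greedy strategy, satisfaction does not decrease as feedback accumulates, which is the adaptation dynamic the ProPerSim task measures (the paper's assistant, by contrast, is retrieval-augmented and preference-aligned).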
Related papers
- Pushing Forward Pareto Frontiers of Proactive Agents with Behavioral Agentic Optimization [61.641777037967366]
Proactive large language model (LLM) agents aim to actively plan, query, and interact over multiple turns. Agentic reinforcement learning (RL) has emerged as a promising solution for training such agents in multi-turn settings. We propose BAO, an agentic RL framework that uses behavior enhancement to enrich proactive reasoning and information-gathering capabilities.
arXiv Detail & Related papers (2026-02-11T20:40:43Z) - The PROPER Approach to Proactivity: Benchmarking and Advancing Knowledge Gap Navigation [17.97529450470058]
Most language-based assistants follow a reactive ask-and-respond paradigm, requiring users to explicitly state their needs. We introduce ProPer, a novel two-agent architecture consisting of a Dimension Generating Agent (DGA) and a Response Generating Agent (RGA). The RGA balances explicit and implicit dimensions to tailor personalized responses with timely, proactive interventions. Our results show that ProPer improves quality scores and win rates across all domains, achieving up to 84% gains in single-turn evaluation and consistent dominance in multi-turn interactions.
arXiv Detail & Related papers (2026-01-14T23:13:01Z) - Towards Proactive Personalization through Profile Customization for Individual Users in Dialogues [28.522406727886395]
PersonalAgent is a lifelong agent designed to continuously infer and adapt to user preferences. Experiments show that PersonalAgent achieves superior performance over strong prompt-based and policy-optimization baselines. Our findings underscore the importance of lifelong personalization for developing more inclusive and adaptive conversational agents.
arXiv Detail & Related papers (2025-12-17T10:47:06Z) - Is Passive Expertise-Based Personalization Enough? A Case Study in AI-Assisted Test-Taking [29.26173340915243]
Expert users have different systematic preferences in task-oriented dialogues. We built a version of an enterprise AI assistant with passive personalization. Preliminary results indicate that passive personalization helps reduce task load and improve assistant perception. These findings underscore the importance of combining active and passive personalization to optimize user experience and effectiveness in enterprise task-oriented environments.
arXiv Detail & Related papers (2025-11-28T17:21:41Z) - Training Proactive and Personalized LLM Agents [107.57805582180315]
We introduce PPP, a multi-objective reinforcement learning approach that jointly optimizes all three dimensions: Productivity, Proactivity, and Personalization. Experiments show that agents trained with PPP achieve substantial improvements over strong baselines such as GPT-5 (+21.6 on average). This work demonstrates that explicitly optimizing for user-centered interaction is critical for building practical and effective AI agents.
arXiv Detail & Related papers (2025-11-04T02:59:36Z) - RecoWorld: Building Simulated Environments for Agentic Recommender Systems [55.979427290369216]
We present RecoWorld, a blueprint for building simulated environments tailored to agentic recommender systems. A user simulator reviews recommended items, updates its mindset, and, when sensing potential user disengagement, generates reflective instructions. The agentic recommender adapts its recommendations by incorporating these user instructions and reasoning traces, creating a dynamic feedback loop.
arXiv Detail & Related papers (2025-09-12T16:44:34Z) - Thought-Augmented Planning for LLM-Powered Interactive Recommender Agent [56.61028117645315]
We propose a novel thought-augmented interactive recommender agent system (TAIRA) that addresses complex user intents through distilled thought patterns. Specifically, TAIRA is designed as an LLM-powered multi-agent system featuring a manager agent that orchestrates recommendation tasks by decomposing user needs and planning subtasks. Through comprehensive experiments conducted across multiple datasets, TAIRA exhibits significantly enhanced performance compared to existing methods.
arXiv Detail & Related papers (2025-06-30T03:15:50Z) - PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time [87.99027488664282]
PersonaAgent is a framework designed to address versatile personalization tasks. It integrates a personalized memory module and a personalized action module. A test-time alignment strategy keeps responses aligned with the user's preferences in real time.
arXiv Detail & Related papers (2025-06-06T17:29:49Z) - From Strangers to Assistants: Fast Desire Alignment for Embodied Agent-User Adaptation [24.232670566927972]
We develop HA-Desire, a home-assistance simulation environment that integrates an LLM-driven human user agent. We present FAMER, a novel framework for fast desire alignment that introduces a desire-based mental reasoning mechanism. Our framework significantly enhances both task execution and communication efficiency, enabling embodied agents to quickly adapt to user-specific desires.
arXiv Detail & Related papers (2025-05-28T15:51:13Z) - Modeling and Optimizing User Preferences in AI Copilots: A Comprehensive Survey and Taxonomy [5.985777189633703]
AI copilots represent a new generation of AI-powered systems designed to assist users in complex, context-rich tasks. Central to this personalization is preference optimization: the system's ability to detect, interpret, and align with individual user preferences. This survey examines how user preferences are operationalized in AI copilots.
arXiv Detail & Related papers (2025-05-28T02:52:39Z) - Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward [11.495697919066341]
We propose leveraging a user model to incorporate a curiosity-based intrinsic reward into multi-turn RLHF. This novel reward mechanism encourages the LLM agent to actively infer user traits by optimizing conversations to improve its user model's accuracy. We demonstrate our method's effectiveness in two distinct domains: significantly improving personalization performance in a conversational recommendation task, and personalizing conversations for different learning styles in an educational setting.
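The summary above describes rewarding the agent for turns that improve its user model's accuracy. A minimal sketch of such an intrinsic reward is below; the function names, the belief representation, and the exact reward rule (accuracy gain on the true trait) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a curiosity-style intrinsic reward: the agent is
# reinforced when a conversation turn sharpens its user model's
# prediction of a hidden user trait.

def trait_likelihood(belief: dict, true_trait: str) -> float:
    """Probability the user model currently assigns to the true trait."""
    return belief[true_trait]

def curiosity_reward(belief_before: dict, belief_after: dict, true_trait: str) -> float:
    # Intrinsic reward = gain in user-model accuracy over the turn.
    return trait_likelihood(belief_after, true_trait) - trait_likelihood(belief_before, true_trait)

# Example: after an informative question, belief shifts toward the
# user's actual learning style, yielding a positive reward.
before = {"visual": 0.5, "verbal": 0.5}
after = {"visual": 0.8, "verbal": 0.2}
reward = curiosity_reward(before, after, true_trait="visual")
```

An uninformative turn leaves the belief unchanged and earns zero intrinsic reward, which is what pushes the agent toward actively probing user traits rather than making generic small talk.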
arXiv Detail & Related papers (2025-04-04T06:35:02Z) - SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World [50.937342998351426]
Chain-of-User-Thought (COUT) is a novel embodied reasoning paradigm. We introduce SmartAgent, an agent framework that perceives cyber environments and reasons about personalized requirements. Our work is the first to formulate the COUT process, serving as a preliminary attempt toward embodied personalized agent learning.
arXiv Detail & Related papers (2024-12-10T12:40:35Z) - AutoPal: Autonomous Adaptation to Users for Personal AI Companionship [41.41280146492634]
This paper emphasizes the necessity of autonomous adaptation in personal AI companionship. We devise a hierarchical framework, AutoPal, that enables controllable and authentic adjustments to the agent's persona. Experiments demonstrate the effectiveness of AutoPal and highlight the importance of autonomous adaptability in AI companionship.
arXiv Detail & Related papers (2024-06-20T03:02:38Z) - AgentCF: Collaborative Learning with Autonomous Language Agents for
Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
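The AgentCF summary's key idea is that users and items are both agents whose states are co-adapted from each interaction. The sketch below illustrates that symmetry with a toy textual memory update; the paper itself uses LLM-driven reflection, and all names here are illustrative assumptions.

```python
# Toy sketch of two-sided agent optimization: after every interaction,
# both the user agent and the item agent update their own memories, so
# adaptation is not confined to the user side.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memory = []  # textual self-description, grown after interactions

    def reflect(self, note: str) -> None:
        self.memory.append(note)

def interact(user: Agent, item: Agent, liked: bool) -> None:
    # Symmetric update: the user records its verdict on the item, and
    # the item records how this user received it.
    verdict = "liked" if liked else "disliked"
    user.reflect(f"{verdict} {item.name}")
    item.reflect(f"{verdict} by {user.name}")

alice = Agent("alice")
camera = Agent("camera")
interact(alice, camera, liked=True)
```

Because item agents also accumulate memories of who liked them, item-item and collective behaviors can emerge on top of the basic user-item loop, which is the diversity of interactions the abstract highlights.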
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.