From Simulation to Strategy: Automating Personalized Interaction Planning for Conversational Agents
- URL: http://arxiv.org/abs/2510.08621v1
- Date: Wed, 08 Oct 2025 09:12:33 GMT
- Authors: Wen-Yu Chang, Tzu-Hung Huang, Chih-Ho Chen, Yun-Nung Chen
- Abstract summary: This work investigates a sales-oriented agent that adapts its dialogue based on user profiles spanning age, gender, and occupation. We introduce a lightweight, occupation-conditioned strategy that guides the agent to prioritize intents aligned with user preferences.
- Score: 17.59366879306331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Amid the rapid rise of agentic dialogue models, realistic user-simulator studies are essential for tuning effective conversation strategies. This work investigates a sales-oriented agent that adapts its dialogue based on user profiles spanning age, gender, and occupation. While age and gender influence overall performance, occupation produces the most pronounced differences in conversational intent. Leveraging this insight, we introduce a lightweight, occupation-conditioned strategy that guides the agent to prioritize intents aligned with user preferences, resulting in shorter and more successful dialogues. Our findings highlight the importance of rich simulator profiles and demonstrate how simple persona-informed strategies can enhance the effectiveness of sales-oriented dialogue systems.
Related papers
- Agentic Conversational Search with Contextualized Reasoning via Reinforcement Learning [66.52010873968383]
We introduce a conversational agent that interleaves search and reasoning across turns, enabling exploratory and adaptive behaviors learned through reinforcement learning (RL) training. The experimental results across four widely used conversational benchmarks demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2026-01-19T14:55:54Z) - PRINCIPLES: Synthetic Strategy Memory for Proactive Dialogue Agents [16.819463022406627]
We propose PRINCIPLES: a synthetic strategy memory for proactive dialogue agents. PRINCIPLES is derived through offline self-play simulations and serves as reusable knowledge that guides strategy planning. We evaluate PRINCIPLES in both emotional support and persuasion domains, demonstrating consistent improvements over strong baselines.
arXiv Detail & Related papers (2025-09-22T07:53:59Z) - Aligning Spoken Dialogue Models from User Interactions [55.192134724622235]
We propose a novel preference alignment framework to improve spoken dialogue models on real-time conversations from user interactions. We create a dataset of more than 150,000 preference pairs from raw multi-turn speech conversations annotated with AI feedback. Our findings shed light on the importance of a well-calibrated balance among various dynamics, crucial for natural real-time speech dialogue systems.
arXiv Detail & Related papers (2025-06-26T16:45:20Z) - Exploring Personality-Aware Interactions in Salesperson Dialogue Agents [21.282523537612477]
This study explores the influence of user personas, defined using the Myers-Briggs Type Indicator (MBTI), on the interaction quality and performance of sales-oriented dialogue agents. Our findings reveal significant patterns in interaction dynamics, task completion rates, and dialogue naturalness, underscoring the future potential for dialogue agents to refine their strategies.
arXiv Detail & Related papers (2025-04-25T04:10:25Z) - Towards Personalized Conversational Sales Agents: Contextual User Profiling for Strategic Action [12.637812936971049]
We present Conversational Sales (CSALES), a novel task that integrates preference elicitation, recommendation, and persuasion within a unified conversational framework. We also propose CSI, a conversational sales agent that proactively infers contextual user profiles and strategically selects actions through conversation.
arXiv Detail & Related papers (2025-03-28T15:49:52Z) - Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, then apply offline reinforcement learning (RL) to train an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z) - Self-Emotion Blended Dialogue Generation in Social Simulation Agents [18.781929161272853]
This study explores how self-emotion affects the agents' behaviors in dialogue strategies and decision-making within a large language model (LLM)-driven simulation framework.
The results show that incorporating self-emotion helps agents exhibit more human-like dialogue strategies.
In a virtual simulation environment where agents have discussions on multiple topics, we show that self-emotion of agents can significantly influence the decision-making process of the agents.
arXiv Detail & Related papers (2024-08-03T02:11:48Z) - Injecting Salesperson's Dialogue Strategies in Large Language Models with Chain-of-Thought Reasoning [23.919423630938226]
SalesBot simulates dialogues transitioning from chit-chat to task-oriented scenarios to train sales agents.
The initial data lacked smooth transitions and coherent long-turn dialogues, resulting in unnatural sales-customer interactions.
We introduce a novel model called SalesAgent, trained on salesperson's interactions, using chain-of-thought (CoT) reasoning.
arXiv Detail & Related papers (2024-04-29T10:12:04Z) - Self-Explanation Prompting Improves Dialogue Understanding in Large
Language Models [52.24756457516834]
We propose a novel "Self-Explanation" prompting strategy to enhance the comprehension abilities of Large Language Models (LLMs).
This task-agnostic approach requires the model to analyze each dialogue utterance before task execution, thereby improving performance across various dialogue-centric tasks.
Experimental results from six benchmark datasets confirm that our method consistently outperforms other zero-shot prompts and matches or exceeds the efficacy of few-shot prompts.
arXiv Detail & Related papers (2023-09-22T15:41:34Z) - JoTR: A Joint Transformer and Reinforcement Learning Framework for
Dialog Policy Learning [53.83063435640911]
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
We introduce a novel framework, JoTR, to generate flexible dialogue actions.
Unlike traditional methods, JoTR formulates a word-level policy that allows for a more dynamic and adaptable dialogue action generation.
arXiv Detail & Related papers (2023-09-01T03:19:53Z) - Interacting with Non-Cooperative User: A New Paradigm for Proactive
Dialogue Policy [83.61404191470126]
We propose a new solution named I-Pro that can learn Proactive policy in the Interactive setting.
Specifically, we learn the trade-off via a learned goal weight, which consists of four factors.
The experimental results demonstrate I-Pro significantly outperforms baselines in terms of effectiveness and interpretability.
arXiv Detail & Related papers (2022-04-07T14:11:31Z) - Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness [116.804536884437]
We propose an opposite behavior aware framework for policy learning in goal-oriented dialogues.
We estimate the opposite agent's policy from its behavior and use this estimation to improve the target agent, treating the estimated policy as part of the target policy.
arXiv Detail & Related papers (2020-04-21T03:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.