How Do LLMs Persuade? Linear Probes Can Uncover Persuasion Dynamics in Multi-Turn Conversations
- URL: http://arxiv.org/abs/2508.05625v1
- Date: Thu, 07 Aug 2025 17:58:41 GMT
- Title: How Do LLMs Persuade? Linear Probes Can Uncover Persuasion Dynamics in Multi-Turn Conversations
- Authors: Brandon Jaipersaud, David Krueger, Ekdeep Singh Lubana
- Abstract summary: Large Language Models (LLMs) have started to demonstrate the ability to persuade humans. Recent work has used linear probes, lightweight tools for analyzing model representations, to study various LLM skills. Motivated by this, we apply probes to study persuasion dynamics in natural, multi-turn conversations.
- Score: 11.221875709359974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have started to demonstrate the ability to persuade humans, yet our understanding of how this dynamic transpires is limited. Recent work has used linear probes, lightweight tools for analyzing model representations, to study various LLM skills such as the ability to model user sentiment and political perspective. Motivated by this, we apply probes to study persuasion dynamics in natural, multi-turn conversations. We leverage insights from cognitive science to train probes on distinct aspects of persuasion: persuasion success, persuadee personality, and persuasion strategy. Despite their simplicity, we show that they capture various aspects of persuasion at both the sample and dataset levels. For instance, probes can identify the point in a conversation where the persuadee was persuaded or where persuasive success generally occurs across the entire dataset. We also show that in addition to being faster than expensive prompting-based approaches, probes can do just as well and even outperform prompting in some settings, such as when uncovering persuasion strategy. This suggests probes as a plausible avenue for studying other complex behaviours such as deception and manipulation, especially in multi-turn settings and large-scale dataset analysis where prompting-based methods would be computationally inefficient.
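To make the probing approach in the abstract concrete, below is a minimal sketch of how such a linear probe can be trained, not the authors' actual pipeline: a hidden state is read out per conversation turn from an open LLM and a logistic-regression probe is fit to predict a binary "persuaded" label. The model choice (gpt2), the layer index, and the toy turns and labels are all illustrative assumptions.

```python
# Minimal linear-probe sketch: fit a logistic regression on per-turn
# LLM hidden states to predict persuasion success. All data here is toy.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def turn_representation(text: str, layer: int = 6) -> torch.Tensor:
    """Mean-pooled hidden state of one conversation turn at a chosen layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)               # (dim,)

# Hypothetical training data: persuadee turns labeled with whether the
# persuadee was persuaded at that point in the conversation.
turns = [
    "I'm really not convinced donating helps.",
    "Okay, that statistic does change my view a bit.",
    "You make a fair point; I'll donate this time.",
]
labels = [0, 1, 1]

X = torch.stack([turn_representation(t) for t in turns]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)

# The probe's per-turn probabilities trace where in the conversation
# persuasive success emerges, at a fraction of the cost of prompting.
print(probe.predict_proba(X)[:, 1])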
Related papers
- It's the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics [5.418014947856176]
We introduce an automated model to identify willingness to persuade and measure the frequency and context of persuasive attempts. We find that many open and closed-weight models are frequently willing to attempt persuasion on harmful topics.
arXiv Detail & Related papers (2025-06-03T13:37:51Z) - Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z) - Eliciting Language Model Behaviors with Investigator Agents [93.34072434845162]
Language models exhibit complex, diverse behaviors when prompted with free-form text. We study the problem of behavior elicitation, where the goal is to search for prompts that induce specific target behaviors. We train investigator models to map randomly-chosen target behaviors to a diverse distribution of outputs that elicit them.
arXiv Detail & Related papers (2025-02-03T10:52:44Z) - Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z) - Measuring and Benchmarking Large Language Models' Capabilities to Generate Persuasive Language [41.052284715017606]
We study the ability of Large Language Models (LLMs) to produce persuasive text. As opposed to prior work which focuses on particular domains or types of persuasion, we conduct a general study across various domains. We construct the new dataset Persuasive-Pairs: pairs of a short text and its rewrite by an LLM to amplify or diminish persuasive language.
arXiv Detail & Related papers (2024-06-25T17:40:47Z) - How do Large Language Models Navigate Conflicts between Honesty and Helpfulness? [14.706111954807021]
We use psychological models and experiments designed to characterize human behavior to analyze large language models.
We find that reinforcement learning from human feedback improves both honesty and helpfulness.
GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener's decision context.
arXiv Detail & Related papers (2024-02-11T19:13:26Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - On the Role of Attention in Prompt-tuning [90.97555030446563]
We study prompt-tuning for one-layer attention architectures in a contextual mixture-model setting.
We show that softmax-prompt-attention is provably more expressive than softmax-self-attention and linear-prompt-attention.
We also provide experiments that verify our theoretical insights on real datasets and demonstrate how prompt-tuning enables the model to attend to context-relevant information.
arXiv Detail & Related papers (2023-06-06T06:23:38Z) - Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games [45.55448048482881]
We introduce the first multimodal dataset for modeling persuasion behaviors.
Our dataset includes 199 dialogue transcriptions and videos, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes.
arXiv Detail & Related papers (2022-12-16T04:52:53Z) - What Changed Your Mind: The Roles of Dynamic Topics and Discourse in Argumentation Process [78.4766663287415]
This paper presents a study that automatically analyzes the key factors in argument persuasiveness.
We propose a novel neural model that is able to track the changes of latent topics and discourse in argumentative conversations.
arXiv Detail & Related papers (2020-02-10T04:27:48Z)