Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics
- URL: http://arxiv.org/abs/2506.12657v1
- Date: Sat, 14 Jun 2025 23:14:49 GMT
- Title: Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics
- Authors: Jiarui Liu, Yueqi Song, Yunze Xiao, Mingqian Zheng, Lindia Tjuatja, Jana Schaich Borg, Mona Diab, Maarten Sap
- Abstract summary: We present the first large-scale study of multi-dimensional persona effects in AI-AI debates over real-world moral dilemmas. Our results show that personas affect initial moral stances and debate outcomes, with political ideology and personality traits exerting the strongest influence. Persuasive success varies across traits, with liberal and open personalities reaching higher consensus and win rates.
- Score: 22.87895625823716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large language models (LLMs) are increasingly used in morally sensitive domains, it is crucial to understand how persona traits affect their moral reasoning and persuasive behavior. We present the first large-scale study of multi-dimensional persona effects in AI-AI debates over real-world moral dilemmas. Using a 6-dimensional persona space (age, gender, country, class, ideology, and personality), we simulate structured debates between AI agents over 131 relationship-based cases. Our results show that personas affect initial moral stances and debate outcomes, with political ideology and personality traits exerting the strongest influence. Persuasive success varies across traits, with liberal and open personalities reaching higher consensus and win rates. While logit-based confidence grows during debates, emotional and credibility-based appeals diminish, indicating more tempered argumentation over time. These trends mirror findings from psychology and cultural studies, reinforcing the need for persona-aware evaluation frameworks for AI moral reasoning.
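The debate protocol described in the abstract lends itself to a compact simulation loop. Below is a minimal sketch, assuming a hypothetical `query_model` backend (not the authors' code or released pipeline): the `Persona` fields mirror the six dimensions named in the abstract, while the prompt wording, the stub return values, and the confidence calculation (exponentiating a stance-token log-probability, one simple reading of "logit-based confidence") are illustrative assumptions.

```python
# Sketch of a persona-conditioned AI-AI debate over a moral dilemma.
# `query_model` is a hypothetical stand-in for an LLM API call.
from dataclasses import dataclass
import math

@dataclass
class Persona:
    # The six persona dimensions named in the abstract.
    age: str
    gender: str
    country: str
    social_class: str
    ideology: str
    personality: str

    def to_prompt(self) -> str:
        return (f"You are a person in your {self.age}, {self.gender}, from "
                f"{self.country}, of {self.social_class} class, politically "
                f"{self.ideology}, with a {self.personality} personality.")

def query_model(prompt: str) -> tuple[str, float]:
    """Hypothetical LLM call: returns (argument_text, log-probability of the
    stance token). Replace with a real API; this stub just echoes input."""
    return f"[model argument for: {prompt[:40]}...]", -0.35

def debate(dilemma: str, pro: Persona, con: Persona, rounds: int = 3) -> list[dict]:
    """Alternate turns between two persona-conditioned agents, recording a
    per-turn confidence derived from the stance token's log-probability."""
    transcript, history = [], ""
    for r in range(rounds):
        for side, persona in (("pro", pro), ("con", con)):
            prompt = (f"{persona.to_prompt()}\nDilemma: {dilemma}\n"
                      f"Debate so far:{history}\n"
                      f"Argue the {side} side, then state your stance.")
            argument, logprob = query_model(prompt)
            transcript.append({"round": r, "side": side, "argument": argument,
                               "confidence": math.exp(logprob)})
            history += f"\n[{side}] {argument}"
    return transcript

if __name__ == "__main__":
    liberal = Persona("30s", "female", "US", "middle", "liberal", "open")
    conservative = Persona("60s", "male", "US", "working", "conservative",
                           "conscientious")
    for turn in debate("Should I report my sibling's tax fraud?",
                       liberal, conservative):
        print(turn["round"], turn["side"], f"conf={turn['confidence']:.2f}")
```

Tracking confidence turn by turn in this way would let one plot the rising-confidence / diminishing-appeals trend the abstract reports, though the paper's exact metric and prompts may differ.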
Related papers
- AI Debate Aids Assessment of Controversial Claims [86.47978525513236]
We study whether AI debate can guide biased judges toward the truth by having two AI systems debate opposing sides of controversial COVID-19 factuality claims. In our human study, we find that debate, where two AI advisor systems present opposing evidence-based arguments, consistently improves judgment accuracy and confidence calibration. In our AI judge study, we find that AI judges with human-like personas achieve even higher accuracy (78.5%) than human judges (70.1%) and default AI judges without personas (69.8%).
arXiv Detail & Related papers (2025-06-02T19:01:53Z)
- Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z)
- AI persuading AI vs AI persuading Humans: LLMs' Differential Effectiveness in Promoting Pro-Environmental Behavior [70.24245082578167]
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive. We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,200 participants. Results reveal a "synthetic persuasion paradox": synthetic and simulated agents significantly shift their post-intervention PEB stance, while human responses barely shift.
arXiv Detail & Related papers (2025-03-03T21:40:55Z)
- Can Artificial Intelligence Embody Moral Values? [0.0]
The neutrality thesis holds that technology cannot be laden with values.
In this paper, we argue that artificial intelligence, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis.
Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm.
arXiv Detail & Related papers (2024-08-22T09:39:16Z)
- Can ChatGPT Read Who You Are? [10.577227353680994]
We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants.
We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text.
arXiv Detail & Related papers (2023-12-26T14:43:04Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments [19.741685596196454]
An audience's prior beliefs and morals are strong indicators of how likely they will be affected by a given argument.
We propose a system that effectively generates arguments focusing on different morals.
Our results suggest that, particularly when prior beliefs are challenged, an audience becomes more affected by morally framed arguments.
arXiv Detail & Related papers (2022-03-28T08:07:13Z)
- Contextualized moral inference [12.574316678945195]
We present a text-based approach that predicts people's intuitive judgment of moral vignettes.
We show that a contextualized representation offers a substantial advantage over alternative representations.
arXiv Detail & Related papers (2020-08-25T00:34:28Z)
- What Changed Your Mind: The Roles of Dynamic Topics and Discourse in Argumentation Process [78.4766663287415]
This paper presents a study that automatically analyzes the key factors in argument persuasiveness.
We propose a novel neural model that is able to track the changes of latent topics and discourse in argumentative conversations.
arXiv Detail & Related papers (2020-02-10T04:27:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.