Must Read: A Systematic Survey of Computational Persuasion
- URL: http://arxiv.org/abs/2505.07775v1
- Date: Mon, 12 May 2025 17:26:31 GMT
- Title: Must Read: A Systematic Survey of Computational Persuasion
- Authors: Nimet Beyza Bozdag, Shuhaib Mehri, Xiaocheng Yang, Hyeonjeong Ha, Zirui Cheng, Esin Durmus, Jiaxuan You, Heng Ji, Gokhan Tur, Dilek Hakkani-Tür
- Abstract summary: AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
- Score: 60.83151988635103
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Persuasion is a fundamental aspect of communication, influencing decision-making across diverse contexts, from everyday conversations to high-stakes scenarios such as politics, marketing, and law. The rise of conversational AI systems has significantly expanded the scope of persuasion, introducing both opportunities and risks. AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Moreover, AI systems are not only persuaders, but also susceptible to persuasion, making them vulnerable to adversarial attacks and bias reinforcement. Despite rapid advancements in AI-generated persuasive content, our understanding of what makes persuasion effective remains limited due to its inherently subjective and context-dependent nature. In this survey, we provide a comprehensive overview of computational persuasion, structured around three key perspectives: (1) AI as a Persuader, which explores AI-generated persuasive content and its applications; (2) AI as a Persuadee, which examines AI's susceptibility to influence and manipulation; and (3) AI as a Persuasion Judge, which analyzes AI's role in evaluating persuasive strategies, detecting manipulation, and ensuring ethical persuasion. We introduce a taxonomy for computational persuasion research and discuss key challenges, including evaluating persuasiveness, mitigating manipulative persuasion, and developing responsible AI-driven persuasive systems. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion while addressing the risks posed by increasingly capable language models.
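The survey's three-perspective taxonomy lends itself to a simple data model. The sketch below is purely illustrative and not drawn from the paper; the names (`PersuasionRole`, `PersuasionStudy`, `bucket_by_role`) and fields are hypothetical, showing one way the Persuader/Persuadee/Judge framing could organize a literature collection:

```python
from enum import Enum, auto
from dataclasses import dataclass

class PersuasionRole(Enum):
    """The survey's three perspectives on computational persuasion."""
    PERSUADER = auto()   # AI generates persuasive content
    PERSUADEE = auto()   # AI is the target of persuasive influence
    JUDGE = auto()       # AI evaluates persuasiveness and detects manipulation

@dataclass
class PersuasionStudy:
    """Toy record classifying a paper under the taxonomy (hypothetical schema)."""
    title: str
    role: PersuasionRole
    is_adversarial: bool  # e.g., persuasion-based jailbreaks targeting the model itself

def bucket_by_role(studies: list[PersuasionStudy]) -> dict[PersuasionRole, list[str]]:
    """Group paper titles by the taxonomy role they study."""
    buckets: dict[PersuasionRole, list[str]] = {r: [] for r in PersuasionRole}
    for s in studies:
        buckets[s.role].append(s.title)
    return buckets

if __name__ == "__main__":
    studies = [
        PersuasionStudy("Persuasive dialogue generation", PersuasionRole.PERSUADER, False),
        PersuasionStudy("Jailbreak via persuasion", PersuasionRole.PERSUADEE, True),
        PersuasionStudy("Manipulation detection benchmark", PersuasionRole.JUDGE, False),
    ]
    for role, titles in bucket_by_role(studies).items():
        print(role.name, "->", titles)
```

Framing the roles as an enum keeps the three perspectives mutually exclusive per study, mirroring how the survey buckets prior work.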
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI.
An increasing number of efforts address this problem, typically either by (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- Persuasion and Safety in the Era of Generative AI [0.0]
The EU AI Act prohibits AI systems that use manipulative or deceptive techniques to undermine informed decision-making.
My dissertation addresses the lack of empirical studies in this area by developing a taxonomy of persuasive techniques.
It provides resources to mitigate the risks of persuasive AI and fosters discussions on ethical persuasion in the age of generative AI.
arXiv Detail & Related papers (2025-05-18T06:04:46Z)
- Aspirational Affordances of AI [0.0]
There are growing concerns about how artificial intelligence systems may confine individuals and groups to static or restricted narratives about who or what they could be.
We introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition.
We show how this concept can ground productive evaluations of the risks of AI-enabled representations and narratives.
arXiv Detail & Related papers (2025-04-21T22:37:49Z)
- Persuasion Should be Double-Blind: A Multi-Domain Dialogue Dataset With Faithfulness Based on Causal Theory of Mind [21.022976907694265]
Recent persuasive dialogue datasets often fail to align with real-world interpersonal interactions.
We introduce ToMMA, a novel multi-agent framework for dialogue generation guided by causal Theory of Mind.
We present CToMPersu, a multi-domain, multi-turn persuasive dialogue dataset.
arXiv Detail & Related papers (2025-02-28T18:28:16Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior.
Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI [19.675489660806942]
Generative AI presents a new risk profile of persuasion due to reciprocal exchange and prolonged interactions.
This has led to growing concerns about harms from AI persuasion and how they can be mitigated.
Existing harm mitigation approaches prioritise harms from the outcome of persuasion over harms from the process of persuasion.
arXiv Detail & Related papers (2024-04-23T14:07:20Z)
- Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control over our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based on Modelling the Beliefs and Concerns of the Persuadee [6.091096843566857]
Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience.
We present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues.
arXiv Detail & Related papers (2021-01-28T08:49:24Z)
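The final entry above describes choosing dialogue moves by modelling the persuadee's beliefs and concerns. Below is a minimal sketch of one such greedy move selection, assuming a weighted linear combination of believability and concern impact; the scoring rule, the `alpha` trade-off, and all names here are hypothetical rather than the paper's actual framework:

```python
from dataclasses import dataclass

@dataclass
class PersuadeeModel:
    """Hypothetical persuadee model: belief in each argument, weight of each concern."""
    belief: dict[str, float]          # argument id -> believability in [0, 1]
    concern_weight: dict[str, float]  # concern id -> importance in [0, 1]

@dataclass
class Argument:
    arg_id: str
    addresses: dict[str, float]  # concern id -> how strongly the argument bears on it

def score(arg: Argument, model: PersuadeeModel, alpha: float = 0.5) -> float:
    """Combine believability with impact on the persuadee's concerns.
    The linear trade-off via alpha is illustrative only."""
    believability = model.belief.get(arg.arg_id, 0.0)
    concern_impact = sum(
        model.concern_weight.get(c, 0.0) * strength
        for c, strength in arg.addresses.items()
    )
    return alpha * believability + (1 - alpha) * concern_impact

def best_move(candidates: list[Argument], model: PersuadeeModel) -> Argument:
    """Greedy move choice: play the highest-scoring candidate argument."""
    return max(candidates, key=lambda a: score(a, model))

if __name__ == "__main__":
    model = PersuadeeModel(
        belief={"a1": 0.9, "a2": 0.4},
        concern_weight={"cost": 0.8, "safety": 0.6},
    )
    candidates = [
        Argument("a1", addresses={"cost": 0.3}),
        Argument("a2", addresses={"cost": 0.9, "safety": 0.7}),
    ]
    print(best_move(candidates, model).arg_id)
```

A real system would update the persuadee model after each turn (e.g., via Bayesian belief revision) rather than scoring moves against a static snapshot.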
This list is automatically generated from the titles and abstracts of the papers on this site.