Artificial Influence: An Analysis Of AI-Driven Persuasion
- URL: http://arxiv.org/abs/2303.08721v1
- Date: Wed, 15 Mar 2023 16:05:11 GMT
- Title: Artificial Influence: An Analysis Of AI-Driven Persuasion
- Authors: Matthew Burtell and Thomas Woodside
- Abstract summary: We warn that ubiquitous highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Persuasion is a key aspect of what it means to be human, and is central to
business, politics, and other endeavors. Advancements in artificial
intelligence (AI) have produced AI systems that are capable of persuading
humans to buy products, watch videos, click on search results, and more. Even
systems that are not explicitly designed to persuade may do so in practice. In
the future, increasingly anthropomorphic AI systems may form ongoing
relationships with users, increasing their persuasive power. This paper
investigates the uncertain future of persuasive AI systems. We examine ways
that AI could qualitatively alter our relationship to and views regarding
persuasion by shifting the balance of persuasive power, allowing personalized
persuasion to be deployed at scale, powering misinformation campaigns, and
changing the way humans can shape their own discourse. We consider ways
AI-driven persuasion could differ from human-driven persuasion. We warn that
ubiquitous highly persuasive AI systems could alter our information environment
so significantly as to contribute to a loss of human control of our own
future. In response, we examine several potential responses to AI-driven
persuasion: prohibition, identification of AI agents, truthful AI, and legal
remedies. We conclude that none of these solutions will be airtight, and that
individuals and governments will need to take active steps to guard against the
most pernicious effects of persuasive AI.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI [19.675489660806942]
Generative AI presents a new risk profile of persuasion due to reciprocal exchange and prolonged interactions.
This has led to growing concerns about harms from AI persuasion and how they can be mitigated.
Existing harm mitigation approaches prioritise harms from the outcome of persuasion over harms from the process of persuasion.
arXiv Detail & Related papers (2024-04-23T14:07:20Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment with regard to hotel review classifications and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs [22.52332536886295]
We present a novel formulation of the interaction between the human and the AI as a sequential game.
We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP.
We discuss ways in which the machine can also learn to improve upon its own limitations with the help of the human.
arXiv Detail & Related papers (2022-04-03T21:00:51Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.