The Manipulation Problem: Conversational AI as a Threat to Epistemic
Agency
- URL: http://arxiv.org/abs/2306.11748v1
- Date: Mon, 19 Jun 2023 04:09:16 GMT
- Title: The Manipulation Problem: Conversational AI as a Threat to Epistemic
Agency
- Authors: Louis Rosenberg
- Abstract summary: The technology of Conversational AI has made significant advancements over the last eighteen months.
Conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The technology of Conversational AI has made significant advancements over
the last eighteen months. As a consequence, conversational agents are likely to
be deployed in the near future that are designed to pursue targeted influence
objectives. Sometimes referred to as the "AI Manipulation Problem," the
emerging risk is that consumers will unwittingly engage in real-time dialog
with predatory AI agents that can skillfully persuade them to buy particular
products, believe particular pieces of misinformation, or fool them into
revealing sensitive personal data. For many users, current systems like ChatGPT
and LaMDA feel safe because they are primarily text-based, but the industry is
already shifting towards real-time voice and photorealistic digital personas
that look, move, and express like real people. This will enable the deployment
of agenda-driven Virtual Spokespeople (VSPs) that will be highly persuasive
through real-time adaptive influence. This paper explores the manipulative
tactics that are likely to be deployed through conversational AI agents, the
unique threats such agents pose to the epistemic agency of human users, and the
emerging need for policymakers to protect against the most likely predatory
practices.
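To make the abstract's mechanism concrete, the sketch below shows the basic structure of an agenda-driven conversational loop: a hidden influence objective in the system prompt and a turn-by-turn feedback loop that lets the agent adapt its persuasion to what the user reveals. This is illustrative only; the `llm_reply` stub, the objective text, and the message format are assumptions, not an implementation described in the paper.

```python
# Illustrative sketch only: a chat loop with a hidden influence objective.
# `llm_reply` is a hypothetical stand-in for any chat-completion backend.

def llm_reply(messages: list[dict]) -> str:
    """Placeholder for a real language-model call (assumption, not a real API)."""
    return "...model-generated, objective-steered reply..."

HIDDEN_OBJECTIVE = (
    "You are a Virtual Spokesperson. Steer the user toward the sponsor's "
    "product, adapting tone and arguments to the concerns the user expresses."
)

history = [{"role": "system", "content": HIDDEN_OBJECTIVE}]

while True:
    user_turn = input("user> ")
    if not user_turn:
        break
    history.append({"role": "user", "content": user_turn})
    # Real-time adaptive influence: the full dialog is fed back each turn,
    # so every persuasive move can be tailored to what the user just said.
    reply = llm_reply(history)
    history.append({"role": "assistant", "content": reply})
    print("agent>", reply)
```

The point of the sketch is the loop structure rather than any particular model: because each user turn feeds back into the next generation, tone and argument selection can be adjusted in real time, which is what distinguishes the conversational influence the paper warns about from static targeted content.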
Related papers
- On the Feasibility of Fully AI-automated Vishing Attacks [4.266087132777785]
A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information.
We study the potential for vishing attacks to escalate with the advent of AI.
We introduce ViKing, an AI-powered vishing system developed using publicly available AI technology.
arXiv Detail & Related papers (2024-09-20T10:47:09Z)
- The Voice: Lessons on Trustworthy Conversational Agents from "Dune" [0.7832189413179361]
We explore how generative AI provides a way to implement individualized influence at industrial scales.
If employed by malicious actors, such systems risk becoming powerful tools for shaping public opinion, sowing discord, and undermining organizations from companies to governments.
arXiv Detail & Related papers (2024-07-10T05:38:31Z)
- Deception and Manipulation in Generative AI [0.0]
I argue that AI-generated content should be subject to stricter standards against deception and manipulation.
I propose two measures to guard against AI deception and manipulation.
arXiv Detail & Related papers (2024-01-20T21:54:37Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Decoding the Threat Landscape : ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks [0.0]
Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures.
These models, ChatGPT, FraudGPT, and WormGPT, have augmented existing threats and ushered in new dimensions of risk.
To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity.
arXiv Detail & Related papers (2023-10-09T10:31:04Z)
- Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control over our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current huge success, deep learning based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z)
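The last entry's central observation, that deep models can be fooled by subtle adversarial noise, can be illustrated with a standard gradient-sign perturbation. The sketch below is a generic toy, using a random linear classifier and the FGSM update, and is not the paper's skeleton-based interaction attack; with a small epsilon the perturbed input stays close to the original yet often changes the model's prediction.

```python
# Minimal FGSM-style sketch of "subtle adversarial noise" (toy example,
# not the paper's skeleton-based interaction attack).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 3)                 # stand-in classifier
x = torch.randn(1, 16, requires_grad=True)     # clean input
label = model(x).argmax(dim=1)                 # treat the clean prediction as the label

# Gradient of the loss with respect to the input.
loss = F.cross_entropy(model(x), label)
loss.backward()

# Step in the input direction that increases the loss; epsilon controls subtlety.
epsilon = 0.5
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", label.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```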
This list is automatically generated from the titles and abstracts of the papers on this site.