The Voice: Lessons on Trustworthy Conversational Agents from "Dune"
- URL: http://arxiv.org/abs/2407.18928v1
- Date: Wed, 10 Jul 2024 05:38:31 GMT
- Title: The Voice: Lessons on Trustworthy Conversational Agents from "Dune"
- Authors: Philip Feldman
- Abstract summary: We explore how generative AI provides a way to implement individualized influence at industrial scales.
If employed by malicious actors, these models risk becoming powerful tools for shaping public opinion, sowing discord, and undermining organizations from companies to governments.
- Score: 0.7832189413179361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The potential for untrustworthy conversational agents presents a significant threat for covert social manipulation. Taking inspiration from Frank Herbert's "Dune", where the Bene Gesserit Sisterhood uses the Voice for influence, manipulation, and control of people, we explore how generative AI provides a way to implement individualized influence at industrial scales. Already, these models can manipulate communication across text, image, speech, and most recently video. They are rapidly becoming affordable enough for any organization of even moderate means to train and deploy. If employed by malicious actors, they risk becoming powerful tools for shaping public opinion, sowing discord, and undermining organizations from companies to governments. As researchers and developers, it is crucial to recognize the potential for such weaponization and to explore strategies for prevention, detection, and defense against these emerging forms of sociotechnical manipulation.
Related papers
- On the Feasibility of Fully AI-automated Vishing Attacks [4.266087132777785]
A vishing attack is a form of social engineering where attackers use phone calls to deceive individuals into disclosing sensitive information.
We study the potential for vishing attacks to escalate with the advent of AI.
We introduce ViKing, an AI-powered vishing system developed using publicly available AI technology.
arXiv Detail & Related papers (2024-09-20T10:47:09Z)
- Deception and Manipulation in Generative AI [0.0]
I argue that AI-generated content should be subject to stricter standards against deception and manipulation.
I propose two measures to guard against AI deception and manipulation.
arXiv Detail & Related papers (2024-01-20T21:54:37Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents [5.244401764969407]
Embodied agents, in the form of virtual agents or social robots, are rapidly becoming more widespread.
We propose a novel framework that can generate sequences of joint angles from the speech text and speech audio utterances.
arXiv Detail & Related papers (2023-09-17T18:46:25Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency [0.0]
The technology of Conversational AI has made significant advancements over the last eighteen months.
Conversational agents designed to pursue targeted influence objectives are likely to be deployed in the near future.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z)
- Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z)
- Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems [51.6210785955659]
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
However, when deploying trained communicative agents in a real-world application where noise and potential attackers exist, the safety of communication-based policies becomes a severe issue that is underexplored.
In this work, we consider an environment with $N$ agents, where the attacker may arbitrarily change the communication from any $C < \frac{N-1}{2}$ agents to a victim agent (a toy sketch illustrating this majority bound appears after this list).
arXiv Detail & Related papers (2022-06-21T07:32:18Z)
- Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda learn a variety of behaviors, including partnering and voting, without the need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset (a generic pairwise-ranking sketch appears after this list).
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
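The bound $C < \frac{N-1}{2}$ quoted in the certifiably robust communication entry above is the classic honest-majority condition: as long as fewer than half of the $N-1$ messages a victim agent receives are corrupted, an aggregation step such as majority voting can still recover an honest recommendation. Below is a minimal Python sketch of that intuition only; it is not the authors' ablation-based defense, and the message format (each sender recommending a discrete action) is an assumption made for illustration.

```python
import random
from collections import Counter

def majority_vote(messages):
    """Return the action recommended by the largest number of senders."""
    return Counter(messages).most_common(1)[0][0]

def simulate(n_agents=9, n_corrupt=3):
    """Victim receives one action recommendation from each of the other N-1 agents."""
    senders = n_agents - 1
    assert n_corrupt < senders / 2, "illustrating the C < (N-1)/2 regime"
    # Honest senders all recommend "safe"; corrupted senders push "unsafe".
    messages = ["unsafe"] * n_corrupt + ["safe"] * (senders - n_corrupt)
    random.shuffle(messages)  # the victim cannot tell which senders are corrupted
    return majority_vote(messages)

print(simulate())  # always "safe": the honest majority outvotes the C attackers
```

The paper itself certifies robustness for learned policies by aggregating over random subsets of received messages rather than over explicit action votes; the sketch only shows why $(N-1)/2$ is the natural breaking point.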
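The "ranking model that learns human preferences" in the last entry is described only at a high level; one standard way to realize such a utility model is a pairwise margin-ranking objective, sketched below in PyTorch. The network shape, embedding size, and variable names are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class UtilityRanker(nn.Module):
    """Scores a candidate response given a dialogue-context embedding (illustrative)."""
    def __init__(self, dim=256):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, context, response):
        # Higher score = higher estimated human utility for this response.
        return self.scorer(torch.cat([context, response], dim=-1)).squeeze(-1)

model = UtilityRanker()
loss_fn = nn.MarginRankingLoss(margin=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: embeddings of a context plus a human-preferred and a rejected response.
context = torch.randn(8, 256)
preferred = torch.randn(8, 256)
rejected = torch.randn(8, 256)

score_pos = model(context, preferred)
score_neg = model(context, rejected)
# target = 1 tells the loss that score_pos should exceed score_neg by the margin.
loss = loss_fn(score_pos, score_neg, torch.ones(8))
loss.backward()
optimizer.step()
```

Trained this way, the scorer can rank candidate responses by learned human preference, which matches the role the entry assigns to its utility model.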