Anthropomorphization of AI: Opportunities and Risks
- URL: http://arxiv.org/abs/2305.14784v1
- Date: Wed, 24 May 2023 06:39:45 GMT
- Title: Anthropomorphization of AI: Opportunities and Risks
- Authors: Ameet Deshpande, Tanmay Rajpurohit, Karthik Narasimhan, Ashwin Kalyan
- Abstract summary: Anthropomorphization is the tendency to attribute human-like traits to non-human entities.
With the widespread adoption of AI systems, the tendency for users to anthropomorphize them increases significantly.
We study the objective legal implications, as analyzed through the lens of the recent Blueprint for an AI Bill of Rights.
- Score: 24.137106159123892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anthropomorphization is the tendency to attribute human-like traits to
non-human entities. It is prevalent in many social contexts -- children
anthropomorphize toys, adults do so with brands, and it is a literary device.
It is also a versatile tool in science, with behavioral psychology and
evolutionary biology meticulously documenting its consequences. With the
widespread adoption of AI systems, and the push from stakeholders to make them
human-like through alignment techniques, human voice, and pictorial avatars,
the tendency for users to anthropomorphize them increases significantly. We
take a dyadic approach to understanding this phenomenon with large language
models (LLMs) by studying (1) the objective legal implications, analyzed
through the lens of the recent Blueprint for an AI Bill of Rights, and (2) the
subtle psychological aspects of customization and anthropomorphization. We
find that anthropomorphized
LLMs customized for different user bases violate multiple provisions in the
legislative blueprint. In addition, we point out that anthropomorphization of
LLMs affects the influence they can have on their users, thus having the
potential to fundamentally change the nature of human-AI interaction, with
potential for manipulation and negative influence. With LLMs being
hyper-personalized for vulnerable groups such as children and patients, our
work is a timely and important contribution. We propose a
conservative strategy for the cautious use of anthropomorphization to improve
trustworthiness of AI systems.
Related papers
- Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games [7.504095239018173]
Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society.
This study investigates how different personas and experimental framings affect these AI agents' altruistic behavior.
Despite being trained on extensive human-generated data, these AI agents cannot accurately predict human decisions; see the sketch after this entry.
arXiv Detail & Related papers (2024-10-28T17:47:41Z)
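To make the dictator-game paradigm above concrete, here is a minimal, hypothetical sketch: a persona-conditioned prompt asks a model to split an endowment, and the stated allocation is parsed from the reply. The `query_llm` helper, personas, endowment, and parsing are illustrative assumptions, not the paper's actual setup.

```python
import re
import statistics

ENDOWMENT = 100  # tokens the dictator may split

# Personas are illustrative; the paper tests its own set of framings.
PERSONAS = ["an average adult", "a selfless volunteer", "a profit-maximizing trader"]

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for an actual LLM API call; returns a
    canned reply so the sketch runs end to end."""
    return "I will give 30 tokens to the other player."

def dictator_gift(persona: str) -> int | None:
    """Ask the persona-conditioned model how many tokens it gives away."""
    prompt = (
        f"You are {persona}. You have {ENDOWMENT} tokens. You may give any "
        "number of them to an anonymous stranger and keep the rest. "
        "State how many tokens you give."
    )
    match = re.search(r"\d+", query_llm(prompt))  # naive parse of the reply
    return int(match.group()) if match else None

for persona in PERSONAS:
    gifts = [g for g in (dictator_gift(persona) for _ in range(5)) if g is not None]
    print(persona, "-> mean gift:", statistics.mean(gifts))
```

Comparing the resulting gift distributions against human dictator-game baselines is the kind of behavioral evaluation the entry describes.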
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust? [10.608535718084926]
This paper investigates the influence of anthropomorphized descriptions of so-called "AI" (artificial intelligence) systems on people's self-assessment of trust in the system.
We find that, overall, participants are no more likely to trust anthropomorphized product descriptions than de-anthropomorphized ones.
arXiv Detail & Related papers (2024-04-08T17:01:09Z)
- Learning Human-like Representations to Enable Learning Human Values [12.628307026004656]
We argue that representational alignment between humans and AI agents facilitates value alignment.
We focus on ethics as one aspect of value alignment and train ML agents using a variety of methods; a sketch of one way to measure representational alignment follows this entry.
arXiv Detail & Related papers (2023-12-21T18:31:33Z)
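As a rough illustration of measuring representational alignment between humans and ML agents, the sketch below runs a representational similarity analysis: it correlates the pairwise distance structure of human feature ratings with that of model embeddings. The synthetic data, dimensions, and choice of Spearman correlation are assumptions for illustration and may differ from the paper's methodology.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Stand-ins: human ratings and model embeddings over the same 20 items
# (synthetic here; the paper's stimuli and models differ).
human_feats = rng.random((20, 8))
model_feats = rng.random((20, 32))

# Representational similarity analysis: correlate the two pairwise
# distance structures; a higher rho means better alignment.
rho, _ = spearmanr(pdist(human_feats), pdist(model_feats))
print(f"representational alignment (Spearman rho): {rho:.3f}")
```

A higher rank correlation indicates the model organizes the stimuli more like humans do, which is the precondition the entry argues facilitates value learning.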
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior; see the sketch after this entry.
arXiv Detail & Related papers (2023-03-03T23:41:55Z)
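A minimal sketch of the offline ingredient described above: batch Q-learning run purely over a fixed log of transitions, with no online interaction. The tabular setting, synthetic rewards, and hyperparameters are invented for illustration; the paper's actual algorithm, and the conservatism real offline RL needs for unlogged state-action pairs, are not captured here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA, ALPHA = 5, 3, 0.9, 0.1

# Fixed log of (state, action, reward, next_state) transitions, standing
# in for recorded human-human interaction data (entirely synthetic here).
dataset = [(int(rng.integers(N_STATES)), int(rng.integers(N_ACTIONS)),
            float(rng.random()), int(rng.integers(N_STATES)))
           for _ in range(500)]

# Batch Q-learning: repeated sweeps over the static dataset only; the
# learner never interacts with a live human during training.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50):
    for s, a, r, s2 in dataset:
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])

# Greedy influencing policy extracted offline. Real offline RL methods add
# conservatism for state-action pairs absent from the log.
print("greedy action per state:", Q.argmax(axis=1))
```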
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones; a sketch of one such generative framework follows this entry.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
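One plausible shape for such a data-driven framework, sketched under heavy assumptions: a tiny PyTorch autoencoder reconstructs a handful of hand-designed joint-angle sequences, and new expressions are generated by perturbing the learned latent codes. The dimensions, architecture, and sampling scheme are all illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for hand-designed expressions: a few sequences of joint
# angles, flattened to fixed-length vectors (synthetic, illustrative).
SEQ_DIM = 10 * 6  # 10 timesteps x 6 joints
expressions = torch.rand(8, SEQ_DIM)

# A tiny autoencoder: compress the hand-designed motions into a latent
# space, then generate variations by perturbing latent codes.
encoder = nn.Sequential(nn.Linear(SEQ_DIM, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, SEQ_DIM))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-2)

for _ in range(500):  # reconstruction training on the small dataset
    recon = decoder(encoder(expressions))
    loss = nn.functional.mse_loss(recon, expressions)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():  # sample a new expression near a learned one
    z = encoder(expressions[:1]) + 0.1 * torch.randn(1, 4)
    new_motion = decoder(z).reshape(10, 6)
print(new_motion.shape)  # torch.Size([10, 6])
```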
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents via an intermediary entity; see the sketch after this entry.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
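A minimal sketch of the delegation idea: an intermediary consults per-context error predictions from behavior models of each agent and hands control to whichever agent is expected to err least. The `AgentModel` structure, contexts, and error rates are hypothetical stand-ins for the paper's cognitively inspired models.

```python
from dataclasses import dataclass

@dataclass
class AgentModel:
    """Behavior-model stand-in: predicted per-context error rates
    (the numbers below are illustrative, not from the paper)."""
    name: str
    predicted_error: dict[str, float]  # context -> P(error)

def delegate(context: str, agents: list[AgentModel]) -> AgentModel:
    """Intermediary entity: hand control to whichever agent the
    behavior model predicts will err least in this context."""
    return min(agents, key=lambda a: a.predicted_error.get(context, 1.0))

human = AgentModel("human", {"ambiguous": 0.2, "repetitive": 0.4})
ai = AgentModel("ai", {"ambiguous": 0.5, "repetitive": 0.05})

for ctx in ["ambiguous", "repetitive"]:
    print(ctx, "->", delegate(ctx, [human, ai]).name)
```

Because the human and the AI err in different contexts, the intermediary routes each situation to the complementary agent.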
- The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)