Conceptual Metaphors Impact Perceptions of Human-AI Collaboration
- URL: http://arxiv.org/abs/2008.02311v1
- Date: Wed, 5 Aug 2020 18:39:56 GMT
- Title: Conceptual Metaphors Impact Perceptions of Human-AI Collaboration
- Authors: Pranav Khadpe, Ranjay Krishna, Li Fei-Fei, Jeffrey Hancock, Michael
Bernstein
- Abstract summary: We find that metaphors that signal low competence lead to better evaluations of the agent than metaphors that signal high competence.
A second study confirms that intention to adopt decreases rapidly as competence projected by the metaphor increases.
These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct with a lower competence metaphor.
- Score: 29.737986509769808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the emergence of conversational artificial intelligence (AI) agents, it
is important to understand the mechanisms that influence users' experiences of
these agents. We study a common tool in the designer's toolkit: conceptual
metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler,
or an experienced butler. How might a choice of metaphor influence our
experience of the AI agent? Sampling metaphors along the dimensions of warmth
and competence---defined by psychological theories as the primary axes of
variation for human social perception---we perform a study (N=260) where we
manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational
agent. Following the experience, participants are surveyed about their
intention to use the agent, their desire to cooperate with the agent, and the
agent's usability. Contrary to the current tendency of designers to use high
competence metaphors to describe AI products, we find that metaphors that
signal low competence lead to better evaluations of the agent than metaphors
that signal high competence. This effect persists despite both high and low
competence agents featuring human-level performance and the wizards being blind
to condition. A second study confirms that intention to adopt decreases rapidly
as competence projected by the metaphor increases. In a third study, we assess
effects of metaphor choices on potential users' desire to try out the system
and find that users are drawn to systems that project higher competence and
warmth. These results suggest that projecting competence may help attract new
users, but those users may discard the agent unless it can quickly correct with
a lower competence metaphor. We close with a retrospective analysis that finds
similar patterns between metaphors and user attitudes towards past
conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay.
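To make the study design concrete, here is a minimal, hypothetical sketch of how such a metaphor manipulation could be assigned: participants are randomized across warmth x competence framings while the underlying Wizard-of-Oz behavior stays fixed. The metaphor phrasings (beyond the "toddler" and "experienced butler" examples from the abstract) and the survey labels are illustrative, not the paper's actual materials.

```python
# Hypothetical sketch of the experimental design: metaphors sampled along
# warmth x competence, randomly assigned, while agent behavior never varies.
import random

# Conditions cross the two axes of social perception (warmth, competence).
# "Toddler" and "experienced butler" come from the abstract; the other two
# framings are invented for illustration.
METAPHORS = {
    ("high", "low"): "a toddler",               # warm, not yet skilled
    ("high", "high"): "an experienced butler",  # warm and competent
    ("low", "low"): "a clunky old machine",     # cold and unskilled
    ("low", "high"): "a no-nonsense expert",    # cold but competent
}

def assign_condition(participant_id: int, seed: int = 42) -> dict:
    """Assign a participant a metaphor framing; behavior is held constant."""
    rng = random.Random(seed + participant_id)
    warmth, competence = rng.choice(list(METAPHORS))
    metaphor = METAPHORS[(warmth, competence)]
    return {
        "participant": participant_id,
        "warmth": warmth,
        "competence": competence,
        "framing": f"You will chat with an agent that is like {metaphor}.",
    }

# Post-task survey targets the three outcomes reported in the abstract.
SURVEY = ["intention to use", "desire to cooperate", "usability"]

if __name__ == "__main__":
    for pid in range(3):
        print(assign_condition(pid))
```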
Related papers
- Can Generative Agents Predict Emotion? [0.0]
Large Language Models (LLMs) have demonstrated a number of human-like abilities; however, the empathic understanding and emotional state of LLMs have yet to be aligned with those of humans.
We investigate how the emotional state of generative LLM agents evolves as they perceive new events, introducing a novel architecture in which new experiences are compared to past memories.
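One plausible reading of such a memory-comparison step, sketched under heavy assumptions: the toy embedding below stands in for a real LLM embedding model, and all function names are invented for illustration, not taken from the paper.

```python
# Minimal sketch of comparing a new experience to past memories: embed
# events, retrieve the nearest memories, and let the retrieved context
# feed a downstream emotion-appraisal step.
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-characters embedding; placeholder for a real model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def recall(memories: list[str], new_event: str, k: int = 2) -> list[str]:
    """Return the k past memories most similar to the new event."""
    e = embed(new_event)
    ranked = sorted(memories, key=lambda m: cosine(embed(m), e), reverse=True)
    return ranked[:k]

memories = ["lost a chess game", "won a hackathon", "argued with a friend"]
print(recall(memories, "lost an argument at work"))
```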
arXiv Detail & Related papers (2024-02-06T18:39:43Z)
- Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies [0.9012198585960443]
This study explores how discussing metaphors for AI can help build awareness of the frames that shape our understanding of AI systems.
We analyzed metaphors from a range of sources, and reflected on them individually according to seven questions.
We examined each metaphor in terms of whether it promotes anthropomorphizing, and to what extent it implies that AI is sentient.
arXiv Detail & Related papers (2024-01-15T15:15:48Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Conceptualizing the Relationship between AI Explanations and User Agency [0.9051087836811617]
We analyze the relationship between agency and explanations through a user-centric lens, drawing on case studies and thought experiments.
We find that explanation serves as one of several possible first steps for agency, allowing the user to convert forethought into outcome more effectively in future interactions.
arXiv Detail & Related papers (2023-12-05T23:56:05Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- My Actions Speak Louder Than Your Words: When User Behavior Predicts Their Beliefs about Agents' Attributes [5.893351309010412]
Behavioral science suggests that people sometimes rely on irrelevant information when forming judgments.
We identify an instance of this phenomenon: users who experienced better outcomes in a human-agent interaction systematically rated the agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes with the same agent, even though those outcomes resulted from the users' own behavior.
Our analyses suggest that models should be augmented to account for such biased perceptions, and that agents need mechanisms to detect, and even actively correct, these and similar user biases.
arXiv Detail & Related papers (2023-01-21T21:26:32Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
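As a rough illustration of a preference-ranking utility model of this kind (not the paper's implementation), a Bradley-Terry style pairwise ranker can be fit to crowd-sourced comparisons; the features and training pairs below are placeholders.

```python
# Hedged sketch: learn a scalar "human preference" score from pairwise
# comparisons, as one might derive from a crowd-sourced dataset.
import math
import random

def score(weights: list[float], feats: list[float]) -> float:
    return sum(w * f for w, f in zip(weights, feats))

def train_ranker(pairs, dim, lr=0.1, epochs=200, seed=0):
    """Bradley-Terry style: P(a preferred over b) = sigmoid(s_a - s_b)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for better, worse in pairs:
            margin = score(w, better) - score(w, worse)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient ascent on log sigmoid(margin).
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (better[i] - worse[i])
    return w

# Toy features: [politeness, relevance]; raters prefer polite + relevant.
pairs = [([0.9, 0.8], [0.2, 0.7]), ([0.8, 0.9], [0.6, 0.1])]
print("learned weights:", train_ranker(pairs, dim=2))
```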
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
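A loose sketch of this pipeline's shape, with the sequence-to-sequence generator stubbed out and all helper names invented for illustration: a task response is produced first, a social-language component adjusts its politeness and positivity, and an automatic check verifies that task content is preserved.

```python
# Illustrative pipeline sketch, not the paper's model.
def generate_task_response(query: str) -> str:
    # Stand-in for a sequence-to-sequence model's output.
    return "Your order ships tomorrow."

POLITE_PREFIXES = ["Happy to help!", "Thanks for your patience."]

def add_social_language(response: str, level: float) -> str:
    """Prepend social markers in proportion to the desired level (0..1)."""
    n = round(level * len(POLITE_PREFIXES))
    return " ".join(POLITE_PREFIXES[:n] + [response])

def content_preserved(original: str, adjusted: str) -> bool:
    # Crude automatic measure: all original content words must survive.
    return all(word in adjusted for word in original.split())

resp = generate_task_response("Where is my order?")
social = add_social_language(resp, level=1.0)
assert content_preserved(resp, social)
print(social)
```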
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of another agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
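The core idea, sketched here with toy rule-based policies standing in for learned networks (all names are illustrative, not from the paper), is to encode the other agent's recent behavior into a latent summary and condition the ego agent's action on it.

```python
# Toy repeated-game sketch: infer the other agent's strategy from history,
# then act on that inference. Actions: 1 = cooperate, 0 = defect.
def encode_history(history: list[tuple[int, int]]) -> float:
    """Summarize the other agent's recent actions into a latent scalar
    (a real system would learn a neural encoder over trajectories)."""
    if not history:
        return 0.5
    other_actions = [other for _, other in history]
    return sum(other_actions) / len(other_actions)

def ego_policy(latent: float) -> int:
    # Condition on the inferred strategy: cooperate with a mostly
    # cooperative partner, defect otherwise (toy best response).
    return 1 if latent > 0.5 else 0

def other_policy(history: list[tuple[int, int]]) -> int:
    # The other agent copies the ego agent's last move (tit-for-tat).
    return history[-1][0] if history else 1

history: list[tuple[int, int]] = []
for step in range(5):
    z = encode_history(history)
    history.append((ego_policy(z), other_policy(history)))
print(history)  # the ego agent's choices shape, and adapt to, the other's play
```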