Mirages: On Anthropomorphism in Dialogue Systems
- URL: http://arxiv.org/abs/2305.09800v2
- Date: Mon, 23 Oct 2023 09:26:22 GMT
- Title: Mirages: On Anthropomorphism in Dialogue Systems
- Authors: Gavin Abercrombie, Amanda Cercas Curry, Tanvi Dinkar, Verena Rieser,
Zeerak Talat
- Abstract summary: We discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise.
We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated dialogue or conversational systems are anthropomorphised by
developers and personified by users. While a degree of anthropomorphism may be
inevitable due to the choice of medium, conscious and unconscious design
choices can guide users to personify such systems to varying degrees.
Encouraging users to relate to automated systems as if they were human can lead
to high risk scenarios caused by over-reliance on their outputs. As a result,
natural language processing researchers have investigated the factors that
induce personification and developed resources to mitigate such effects. However,
these efforts are fragmented, and many aspects of anthropomorphism have yet to
be explored. In this paper, we discuss the linguistic factors that contribute
to the anthropomorphism of dialogue systems and the harms that can arise,
including reinforcing gender stereotypes and notions of acceptable language. We
recommend that future efforts towards developing dialogue systems take
particular care in their design, development, release, and description; and
attend to the many linguistic cues that can elicit personification by users.
Related papers
- From Pixels to Personas: Investigating and Modeling Self-Anthropomorphism in Human-Robot Dialogues [29.549159419088294]
Self-anthropomorphism in robots manifests through the display of human-like characteristics in dialogue, such as expressing preferences and emotions.
We show significant differences in these two types of responses and propose transitioning from one type to the other.
We introduce Pix2Persona, a novel dataset aimed at developing ethical and engaging AI systems in various embodiments.
arXiv Detail & Related papers (2024-10-04T19:06:24Z)
- Humane Speech Synthesis through Zero-Shot Emotion and Disfluency Generation [0.6964027823688135]
Modern conversational systems lack the emotional depth and disfluent characteristics of human interactions.
To address this shortcoming, we have designed an innovative speech synthesis pipeline.
Within this framework, a cutting-edge language model introduces both human-like emotion and disfluencies in a zero-shot setting.
arXiv Detail & Related papers (2024-03-31T00:38:02Z)
- Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue [6.886248462185439]
Repetition is typically penalised when evaluating language model generations.
Humans use local and partner specific repetitions; these are preferred by human users and lead to more successful communication in dialogue.
In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) which processing mechanisms related to lexical re-use they employ during comprehension.
arXiv Detail & Related papers (2023-11-21T23:50:33Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define as the sensitivity of dialogue models' harmful behaviors to the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities [60.5609416496429]
The capabilities of large language models have improved dramatically.
Such a major leap forward in general AI capacity will change how personalization is conducted.
By leveraging large language models as a general-purpose interface, personalization systems may compile user requests into plans.
arXiv Detail & Related papers (2023-07-31T02:48:56Z)
- Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- A Review of Dialogue Systems: From Trained Monkeys to Stochastic Parrots [0.0]
We aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans.
We present a broad overview of methods developed to build dialogue systems over the years.
arXiv Detail & Related papers (2021-11-02T08:07:55Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.