From Pixels to Personas: Investigating and Modeling Self-Anthropomorphism in Human-Robot Dialogues
- URL: http://arxiv.org/abs/2410.03870v1
- Date: Fri, 4 Oct 2024 19:06:24 GMT
- Title: From Pixels to Personas: Investigating and Modeling Self-Anthropomorphism in Human-Robot Dialogues
- Authors: Yu Li, Devamanyu Hazarika, Di Jin, Julia Hirschberg, Yang Liu,
- Abstract summary: Self-anthropomorphism in robots manifests itself through their display of human-like characteristics in dialogue, such as expressing preferences and emotions.
We show significant differences between these two types of responses and propose transitioning from one type to the other.
We introduce Pix2Persona, a novel dataset aimed at developing ethical and engaging AI systems in various embodiments.
- Score: 29.549159419088294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-anthropomorphism in robots manifests itself through their display of human-like characteristics in dialogue, such as expressing preferences and emotions. Our study systematically analyzes self-anthropomorphic expression within various dialogue datasets, outlining the contrasts between self-anthropomorphic and non-self-anthropomorphic responses in dialogue systems. We show significant differences in these two types of responses and propose transitioning from one type to the other. We also introduce Pix2Persona, a novel dataset aimed at developing ethical and engaging AI systems in various embodiments. This dataset preserves the original dialogues from existing corpora and enhances them with paired responses: self-anthropomorphic and non-self-anthropomorphic for each original bot response. Our work not only uncovers a new category of bot responses that were previously under-explored but also lays the groundwork for future studies about dynamically adjusting self-anthropomorphism levels in AI systems to align with ethical standards and user expectations.
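To make the paired-response structure concrete, here is a minimal sketch of what a single Pix2Persona-style record could look like; the field names and the example dialogue are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PairedBotResponse:
    """One original bot turn with its two paired rewrites (illustrative schema)."""
    source_dialogue_id: str        # identifier of the dialogue in the source corpus
    user_utterance: str            # user turn that the bot is responding to
    original_response: str         # bot response preserved from the existing corpus
    self_anthropomorphic: str      # rewrite expressing preferences and emotions
    non_self_anthropomorphic: str  # rewrite avoiding human-like self-presentation

# Hypothetical example record (not taken from the dataset).
example = PairedBotResponse(
    source_dialogue_id="corpus-dialogue-0042",
    user_utterance="What kind of music do you like?",
    original_response="Jazz is nice.",
    self_anthropomorphic="I love jazz; it always puts me in a good mood.",
    non_self_anthropomorphic="As an AI I don't have personal tastes, but jazz is "
                             "widely appreciated for its improvisation.",
)
```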
Related papers
- From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust? [10.608535718084926]
This paper investigates the influence of anthropomorphized descriptions of so-called "AI" (artificial intelligence) systems on people's self-assessment of trust in the system.
We find that participants are no more likely to trust anthropomorphized over de-anthropomorphized product descriptions overall.
arXiv Detail & Related papers (2024-04-08T17:01:09Z)
- Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
Results suggest that by adopting our method, the emotion generation performance is improved by 13% in macro-F1 and 5% in weighted-F1 from the BERT-base model.
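The two figures above differ only in how per-class F1 scores are averaged; a minimal scikit-learn sketch with made-up emotion labels illustrates the distinction (macro averages all classes equally, weighted averages by class frequency).

```python
from sklearn.metrics import f1_score

# Toy gold vs. predicted emotion labels (illustrative only).
y_true = ["joy", "joy", "joy", "anger", "sadness", "anger"]
y_pred = ["joy", "joy", "anger", "anger", "sadness", "joy"]

macro_f1 = f1_score(y_true, y_pred, average="macro")        # unweighted mean over classes
weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # mean weighted by class support
print(f"macro-F1={macro_f1:.3f}, weighted-F1={weighted_f1:.3f}")
```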
arXiv Detail & Related papers (2024-04-03T08:48:50Z)
- ChatGPT Role-play Dataset: Analysis of User Motives and Model Naturalness [4.564433526993029]
We study how ChatGPT behaves in conversations across different settings by analyzing its interactions in both a normal setting and a role-play setting.
Our study highlights the diversity of user motives when interacting with ChatGPT and the variability of its naturalness, revealing not only the nuanced dynamics of natural conversations between humans and AI but also new avenues for improving the effectiveness of human-AI communication.
arXiv Detail & Related papers (2024-03-26T22:01:13Z)
- Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences [53.353022588751585]
We present Promptable Behaviors, a novel framework that facilitates efficient personalization of robotic agents to diverse human preferences.
We introduce three distinct methods to infer human preferences by leveraging different types of interactions.
We evaluate the proposed method in personalized object-goal navigation and flee navigation tasks in ProcTHOR and RoboTHOR.
arXiv Detail & Related papers (2023-12-14T21:00:56Z)
- Mirages: On Anthropomorphism in Dialogue Systems [12.507948345088135]
We discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise.
We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description.
arXiv Detail & Related papers (2023-05-16T20:50:46Z)
- Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation [16.659457455269127]
We propose a two-stage conversational agent for the generation of emotional dialogue.
First, a dialogue model trained without an emotion-annotated dialogue corpus generates a prototype response that fits the contextual semantics.
Second, the first-stage prototype is modified by a controllable emotion refiner guided by an empathy hypothesis.
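A rough, hedged sketch of this generate-then-refine flow is shown below; the stand-in functions only mimic the structure of the two stages and are not the paper's models.

```python
# Illustrative two-stage pipeline: draft a prototype reply, then refine it toward a
# target emotion. Real systems would plug in trained dialogue and refiner models here.

def generate_prototype(context: str) -> str:
    # Stage 1 stand-in: a dialogue model drafts a context-appropriate reply.
    return "That sounds like a big moment for you."

def refine_with_emotion(prototype: str, emotion: str) -> str:
    # Stage 2 stand-in: a controllable refiner rewrites the prototype so it
    # carries the target emotion while keeping the original content.
    openers = {"joy": "I'm so happy for you!", "sadness": "I'm really sorry to hear that."}
    return f"{openers.get(emotion, '')} {prototype}".strip()

reply = refine_with_emotion(generate_prototype("I finally defended my thesis!"), "joy")
print(reply)
```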
arXiv Detail & Related papers (2023-01-12T10:03:56Z)
- Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z)
- Interaction Replica: Tracking Human-Object Interaction and Scene Changes From Human Motion [48.982957332374866]
Modeling changes caused by humans is essential for building digital twins.
Our method combines visual localization of humans in the scene with contact-based reasoning about human-scene interactions from IMU data.
Our code, data and model are available on our project page at http://virtualhumans.mpi-inf.mpg.de/ireplica/.
arXiv Detail & Related papers (2022-05-05T17:58:06Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.