Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and
Ethics
- URL: http://arxiv.org/abs/2203.09599v1
- Date: Thu, 17 Mar 2022 20:24:17 GMT
- Title: Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and
Ethics
- Authors: Yuanchao Li, Catherine Lai
- Abstract summary: We discuss the difficulties of synthesizing non-verbal and interaction-oriented speech signals, particularly backchannels.
We present findings from the relevant literature and our prior work, aiming to draw the attention of human-robot interaction researchers to the design of better conversational robots.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many works have investigated the feasibility of
conversational robots for performing specific tasks, such as healthcare and
interviewing. Along with this development comes a practical issue: how should we
synthesize robotic voices to meet the needs of different situations? In this
paper, we discuss this issue from three perspectives: 1) the difficulties of
synthesizing non-verbal and interaction-oriented speech signals, particularly
backchannels; 2) the scenario classification for robotic voice synthesis; 3)
the ethical issues regarding the design of robot voice for its emotion and
identity. We present findings from the relevant literature and our prior work,
aiming to draw the attention of human-robot interaction researchers to the
design of better conversational robots in the future.
Related papers
- Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community [57.56212633174706]
The ability to interact with machines using natural human language is becoming not just commonplace, but expected.
In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots.
We offer the community three proposals, the first focused on education, the second on benchmarks, and the third on the modeling of language when it comes to spoken interaction with robots.
arXiv Detail & Related papers (2024-04-01T15:03:27Z) - Ain't Misbehavin' -- Using LLMs to Generate Expressive Robot Behavior in
Conversations with the Tabletop Robot Haru [9.2526849536751]
We introduce a fully-automated conversation system that leverages large language models (LLMs) to generate robot responses with expressive behaviors.
We conduct a pilot study where volunteers chat with a social robot using our proposed system, and we analyze their feedback, conducting a rigorous error analysis of chat transcripts.
Most negative feedback was due to automatic speech recognition (ASR) errors, which had limited impact on the conversations.
arXiv Detail & Related papers (2024-02-18T12:35:52Z) - Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models [2.5489046505746704]
We design and label four types of empathetic non-verbal cues, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
Preliminary results show distinct patterns in the robot's responses, such as a preference for calm and positive social emotions like 'joy' and 'lively', and frequent nodding gestures.
Our work lays the groundwork for future studies on human-robot interactions, emphasizing the essential role of both verbal and non-verbal cues in creating social and empathetic robots.
arXiv Detail & Related papers (2023-08-31T08:20:04Z) - Interactive Conversational Head Generation [68.76774230274076]
We introduce a new conversation head generation benchmark for synthesizing behaviors of a single interlocutor in a face-to-face conversation.
The capability to automatically synthesize interlocutors that can participate in long, multi-turn conversations is vital and offers benefits for various applications.
arXiv Detail & Related papers (2023-07-05T08:06:26Z) - The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z) - Towards a Real-time Measure of the Perception of Anthropomorphism in
Human-robot Interaction [5.112850258732114]
We conducted an online human-robot interaction experiment in an educational use-case scenario.
43 English-speaking participants took part in the study.
We found that the degree of subjective and objective perception of anthropomorphism positively correlates with acoustic-prosodic entrainment.
arXiv Detail & Related papers (2022-01-24T11:10:37Z) - EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a transformer pretrained language model (T5)
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z) - The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [65.60507052509406]
The aim of this workshop is to give researchers from academia and industry the possibility to discuss the inter-and multi-disciplinary nature of the relationships between people and robots.
arXiv Detail & Related papers (2021-03-23T16:52:12Z) - Let's be friends! A rapport-building 3D embodied conversational agent
for the Human Support Robot [0.0]
Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building.
Our research question is whether integrating an ECA able to mirror its interlocutor's facial expressions and head movements with a human support robot will improve the user's experience.
Our contribution is the integration of an expressive ECA, able to track its interlocutor's face and mirror his/her facial expressions and head movements in real time, with a human support robot.
arXiv Detail & Related papers (2021-03-08T01:02:41Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences.