A Study on the Manifestation of Trust in Speech
- URL: http://arxiv.org/abs/2102.09370v1
- Date: Tue, 9 Feb 2021 13:08:54 GMT
- Title: A Study on the Manifestation of Trust in Speech
- Authors: Lara Gauder, Leonardo Pepino, Pablo Riera, Silvina Brussino, Jazmín
Vidal, Agustín Gravano, Luciana Ferrer
- Abstract summary: We explore the feasibility of automatically detecting the level of trust that a user has on a virtual assistant (VA) based on their speech.
We developed a novel protocol for collecting speech data from subjects induced to have different degrees of trust in the skills of a VA.
We show clear evidence that the protocol effectively succeeded in influencing subjects into the desired mental state of either trusting or distrusting the agent's skills.
- Score: 12.057694908317991
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Research has shown that trust is an essential aspect of human-computer
interaction directly determining the degree to which the person is willing to
use a system. An automatic prediction of the level of trust that a user has in
a certain system could be used to attempt to correct potential distrust by
having the system take relevant actions like, for example, apologizing or
explaining its decisions. In this work, we explore the feasibility of
automatically detecting the level of trust that a user has in a virtual
assistant (VA) based on their speech. We developed a novel protocol for
collecting speech data from subjects induced to have different degrees of trust
in the skills of a VA. The protocol consists of an interactive session where
the subject is asked to respond to a series of factual questions with the help
of a virtual assistant. In order to induce subjects to either trust or distrust
the VA's skills, they are first informed that the VA was previously rated by
other users as being either good or bad; subsequently, the VA answers the
subjects' questions consistently with its alleged abilities. All interactions are
speech-based, with subjects and VAs communicating verbally, which allows the
recording of speech produced under different trust conditions. Using this
protocol, we collected a speech corpus in Argentine Spanish. We show clear
evidence that the protocol effectively succeeded in influencing subjects into
the desired mental state of either trusting or distrusting the agent's skills,
and present results of a perceptual study in which expert listeners rated the
degree of trust. Finally, we found that the subject's speech can be used to
detect which type of VA they were using, which could be considered a proxy for
the user's trust toward the VA's abilities, with an accuracy of up to 76%,
compared to a random baseline of 50%.
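As a toy illustration only (not the authors' actual classifier or features), the kind of binary utterance-level classification described above can be sketched with a nearest-centroid model on synthetic prosodic features; all feature names, values, and class separations below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prosodic features per utterance: [mean pitch (Hz), energy, speech rate].
# Synthetic stand-ins for the two induced conditions in the corpus.
trusting = rng.normal(loc=[200.0, 0.60, 4.0], scale=[15.0, 0.05, 0.3], size=(50, 3))
distrusting = rng.normal(loc=[215.0, 0.70, 3.5], scale=[15.0, 0.05, 0.3], size=(50, 3))

X = np.vstack([trusting, distrusting])
y = np.array([0] * 50 + [1] * 50)  # 0 = trusting condition, 1 = distrusting

# Standardize each feature so no single scale dominates the distance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Nearest-centroid classifier: label an utterance by the closer class mean.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(features: np.ndarray) -> np.ndarray:
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (predict(X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # well above the 0.5 chance level on this separable synthetic data
```

The 76%-vs-50% result in the abstract refers to the authors' own models on real speech; this sketch only shows the shape of the task.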
Related papers
- Explainable Attribute-Based Speaker Verification [12.941187430993796]
We propose an attribute-based explainable speaker verification (SV) system.
It identifies speakers by comparing personal attributes such as gender, nationality, and age extracted automatically from voice recordings.
We believe this approach better aligns with human reasoning, making it more understandable than traditional methods.
arXiv Detail & Related papers (2024-05-30T08:04:28Z) - Humans, AI, and Context: Understanding End-Users' Trust in a Real-World
Computer Vision Application [22.00514030715286]
We provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application.
We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors.
We discuss the implications of our findings and provide recommendations for future research on trust in AI.
arXiv Detail & Related papers (2023-05-15T12:27:02Z) - User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP with great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised
Speech Representation Disentanglement for One-shot Voice Conversion [54.29557210925752]
One-shot voice conversion can be effectively achieved by speech representation disentanglement.
We employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training.
Experimental results reflect the superiority of the proposed method in learning effective disentangled speech representations.
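The VQ content-encoding step described above can be sketched minimally: each frame-level encoder output is snapped to its nearest codebook entry. The codebook size, dimensions, and loss below are illustrative assumptions, not VQMIVC's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a tiny codebook of content embeddings (K entries, D dims)
# and a sequence of T frame-level encoder outputs to be quantized.
K, D, T = 8, 4, 10
codebook = rng.normal(size=(K, D))
frames = rng.normal(size=(T, D))

def vector_quantize(z: np.ndarray, codes: np.ndarray):
    """Map each frame to its nearest codebook entry (Euclidean distance)."""
    dists = np.linalg.norm(z[:, None, :] - codes[None, :, :], axis=2)  # shape (T, K)
    indices = dists.argmin(axis=1)
    return codes[indices], indices

quantized, indices = vector_quantize(frames, codebook)

# VQ training typically adds a commitment term that pulls encoder outputs
# toward their chosen codes; here it is just computed, not optimized.
commitment_loss = np.mean((frames - quantized) ** 2)
print(f"code indices: {indices.tolist()}, commitment loss: {commitment_loss:.3f}")
```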
arXiv Detail & Related papers (2021-06-18T13:50:38Z) - Adversarial Disentanglement of Speaker Representation for
Attribute-Driven Privacy Preservation [17.344080729609026]
We introduce the concept of attribute-driven privacy preservation in speaker voice representation.
It allows a person to hide one or more personal aspects from a potential malicious interceptor and from the application provider.
We propose an adversarial autoencoding method that disentangles in the voice representation a given speaker attribute thus allowing its concealment.
arXiv Detail & Related papers (2020-12-08T14:47:23Z) - Speaker De-identification System using Autoencoders and Adversarial
Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z) - How Much Can We Really Trust You? Towards Simple, Interpretable Trust
Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
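As an illustration only (not the paper's actual metric suite), one simple confidence-based proxy scores a network by rewarding confidence on correct answers and penalizing confidence on wrong ones; the numbers below are made up:

```python
import numpy as np

# Hypothetical per-question model confidences and correctness flags.
confidences = np.array([0.90, 0.80, 0.95, 0.60, 0.99])
correct = np.array([True, True, True, False, False])

trust_when_right = confidences[correct].mean()            # high is good
overconfidence_when_wrong = confidences[~correct].mean()  # high is bad
trust_score = trust_when_right * (1.0 - overconfidence_when_wrong)
print(f"trust score: {trust_score:.3f}")
```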
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z) - Detecting Distrust Towards the Skills of a Virtual Assistant Using
Speech [8.992916975952477]
We study the feasibility of automatically detecting the level of trust that a user has in a virtual assistant (VA) based on their speech.
We find that the subject's speech can be used to detect which type of VA they were using, which could be considered a proxy for the user's trust toward the VA's abilities.
arXiv Detail & Related papers (2020-07-30T19:56:17Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z) - Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention [70.82604384963679]
This paper investigates a self-adaptation method for speech enhancement using auxiliary speaker-aware features.
We extract a speaker representation used for adaptation directly from the test utterance.
arXiv Detail & Related papers (2020-02-14T05:05:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.