The Role of Domain Expertise in User Trust and the Impact of First
Impressions with Intelligent Systems
- URL: http://arxiv.org/abs/2008.09100v1
- Date: Thu, 20 Aug 2020 17:41:02 GMT
- Title: The Role of Domain Expertise in User Trust and the Impact of First
Impressions with Intelligent Systems
- Authors: Mahsan Nourani, Joanie T. King, Eric D. Ragan
- Abstract summary: Domain-specific intelligent systems are meant to help system users in their decision-making process.
Prior domain knowledge can affect user trust and confidence in detecting system errors.
Our research explores the relationship between ordering bias and domain expertise when encountering errors in intelligent systems.
- Score: 7.3817525365473875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain-specific intelligent systems are meant to help system users in their
decision-making process. Many systems aim to simultaneously support different
users with varying levels of domain expertise, but prior domain knowledge can
affect user trust and confidence in detecting system errors. While it is also
known that user trust can be influenced by first impressions with intelligent
systems, our research explores the relationship between ordering bias and
domain expertise when encountering errors in intelligent systems. In this
paper, we present a controlled user study to explore the role of domain
knowledge in establishing trust and susceptibility to the influence of first
impressions on user trust. Participants reviewed an explainable image
classifier with constant accuracy under two different orders of observing
system errors (errors observed at the beginning of usage vs. at the end). Our
findings indicate that encountering errors early on can cause negative first
impressions for domain experts, negatively impacting their trust over the
course of interactions. However, encountering correct outputs early helps more
knowledgeable users to dynamically adjust their trust based on their
observations of system performance. In contrast, novice users suffer from
over-reliance due to their lack of the knowledge needed to detect errors.
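The core manipulation can be made concrete: both conditions contain the same number of errors, and only the position of the errors changes. Below is a minimal Python sketch of how such trial sequences could be constructed; the trial and error counts are illustrative assumptions, not values from the paper.

```python
import random

def make_trial_sequence(n_trials=30, n_errors=6, errors_first=True, seed=0):
    """Build one presentation order of classifier outputs.

    True = correct output, False = error. Overall accuracy is identical
    across conditions; only where the errors appear changes. The counts
    here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    half = n_trials // 2
    # concentrate all errors in one half, shuffled within that half
    error_half = [False] * n_errors + [True] * (half - n_errors)
    rng.shuffle(error_half)
    clean_half = [True] * half
    return error_half + clean_half if errors_first else clean_half + error_half

early_errors = make_trial_sequence(errors_first=True)   # errors at the start
late_errors = make_trial_sequence(errors_first=False)   # correct outputs first
assert sum(early_errors) == sum(late_errors)  # constant overall accuracy
```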
Related papers
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game (a hypothetical sketch of such a game appears after this list).
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - A Systematic Literature Review of User Trust in AI-Enabled Systems: An
HCI Perspective [0.0]
User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and proven as a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z) - A Conceptual Framework for Establishing Trust in Real World Intelligent
Systems [0.0]
Trust in algorithms can be established by letting users interact with the system.
Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns.
Close inspection can be used to decide whether a solution conforms to the expectations or whether it goes beyond the expected.
arXiv Detail & Related papers (2021-04-12T12:58:47Z) - Improving Conversational Question Answering Systems after Deployment
using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback (a sketch appears after this list).
Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z) - Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy [8.11839312231511]
Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
arXiv Detail & Related papers (2020-08-28T16:46:41Z) - More Than Accuracy: Towards Trustworthy Machine Learning Interfaces for
Object Recognition [0.0]
This paper investigates the user experience of visualizations of a machine learning (ML) system that recognizes objects in images.
We exposed users with a background in ML to three visualizations of three systems with different levels of accuracy.
In interviews, we explored how the visualization helped users assess the accuracy of systems in use and how the visualization and the accuracy of the system affected trust and reliance.
arXiv Detail & Related papers (2020-08-05T07:56:37Z)
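The betting game mentioned in "A Diachronic Perspective on User Trust in AI under Uncertainty" is only named in the summary above, so the rules below are a hypothetical illustration of such a trust probe, not the paper's protocol: the stake a user is willing to place on the model being correct serves as a behavioral trust measure.

```python
def simulate_betting_game(outcomes, budget=100.0, lr=0.3):
    """Hypothetical betting-game trust probe; the paper's actual rules are
    not given in the summary above. Each round the user stakes a fraction
    of the budget equal to current trust on the model being correct, then
    nudges trust toward the observed outcome.
    """
    trust, trajectory = 0.5, []
    for model_correct in outcomes:
        stake = trust * budget
        budget += stake if model_correct else -stake  # win or lose the stake
        trust += lr * ((1.0 if model_correct else 0.0) - trust)
        trajectory.append(round(trust, 3))
    return trajectory

# a single trust-eroding event (an error) followed by partial recovery
print(simulate_betting_game([True, True, False, True, True]))
```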
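The summary of "Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning" names two ingredients, binary user feedback and importance sampling, but not the exact objective. The sketch below is one generic way to combine them, an off-policy reweighted log-likelihood, and should not be read as the paper's formulation; all tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def feedback_weighted_loss(logits, actions, feedback, behavior_probs):
    """Reweighted log-likelihood over logged interactions (illustrative).

    logits         -- current model scores over candidate responses, [B, C]
    actions        -- index of the response the deployed system gave, [B]
    feedback       -- binary user feedback (1 = good, 0 = bad),       [B]
    behavior_probs -- probability the deployed system assigned to the
                      logged response (importance-sampling correction), [B]
    """
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    weights = feedback.float() / behavior_probs.clamp_min(1e-8)
    return -(weights * chosen).mean()  # upweight responses users approved

# toy usage with random logged data
logits = torch.randn(4, 10, requires_grad=True)
actions = torch.randint(0, 10, (4,))
feedback = torch.tensor([1, 0, 1, 1])
behavior_probs = torch.full((4,), 0.1)
feedback_weighted_loss(logits, actions, feedback, behavior_probs).backward()
```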
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.