Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy
- URL: http://arxiv.org/abs/2008.12735v1
- Date: Fri, 28 Aug 2020 16:46:41 GMT
- Title: Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy
- Authors: Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
- Abstract summary: Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
- Score: 8.11839312231511
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixed-initiative systems allow users to interactively provide feedback to
potentially improve system performance. Human feedback can correct model errors
and update model parameters to dynamically adapt to changing data.
Additionally, many users desire the ability to have a greater level of control
and fix perceived flaws in systems they rely on. However, how the ability to
provide feedback to autonomous systems influences user trust is a largely
unexplored area of research. Our research investigates how the act of providing
feedback can affect user understanding of an intelligent system and its
accuracy. We present a controlled experiment using a simulated object detection
system with image data to study the effects of interactive feedback collection
on user impressions. The results show that providing human-in-the-loop feedback
lowered both participants' trust in the system and their perception of system
accuracy, regardless of whether the system accuracy improved in response to
their feedback. These results highlight the importance of considering the
effects of allowing end-user feedback on user trust when designing intelligent
systems.
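As an illustration of the experimental manipulation, here is a minimal toy simulation of the feedback conditions: a mock detector answers a fixed number of trials, and feedback either is not collected, is collected but ignored, or nudges accuracy upward. All parameter values and condition names are invented for illustration and are not the authors' materials.

```python
# Illustrative only: a toy sketch of the paper's manipulation, not the
# authors' actual experiment code. All parameters are hypothetical.
import random

def run_condition(base_acc, collect_feedback, acc_gain=0.0, n_trials=40, seed=0):
    """Simulate one participant session with a mock object detector.

    base_acc:         probability the detector labels an image correctly
    collect_feedback: if True, the participant flags each error
    acc_gain:         accuracy improvement applied per correction (0.0 means
                      feedback is collected but ignored, as in one arm)
    """
    rng = random.Random(seed)
    acc = base_acc
    observed_correct = 0
    for _ in range(n_trials):
        correct = rng.random() < acc
        observed_correct += correct
        if collect_feedback and not correct:
            # Participant corrects the error; the system may or may not adapt.
            acc = min(1.0, acc + acc_gain)
    return observed_correct / n_trials

for label, fb, gain in [("no feedback", False, 0.0),
                        ("feedback, ignored", True, 0.0),
                        ("feedback, model adapts", True, 0.01)]:
    print(label, round(run_condition(0.7, fb, gain), 3))
```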
Related papers
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessments of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
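A minimal sketch of the retrospective, counterfactual side of this idea: brute-force search for the smallest subset of a user's past interactions whose removal changes the top recommendation. The tag-overlap recommender and the catalog below are invented stand-ins, not the paper's model.

```python
# Hypothetical sketch of a retrospective counterfactual explanation:
# find past interactions whose removal flips the top recommendation.
from itertools import combinations

CATALOG = {"A": {"sci-fi"}, "B": {"sci-fi"}, "C": {"drama"}, "D": {"drama"}}

def recommend(history):
    # Stand-in recommender: score unseen items by tag overlap with history.
    def score(item):
        return sum(len(CATALOG[item] & CATALOG[h]) for h in history)
    unseen = [i for i in CATALOG if i not in history]
    return max(unseen, key=score)

def retrospective_explanation(history, max_size=2):
    """Smallest subset of past interactions whose removal changes the output."""
    original = recommend(history)
    for k in range(1, max_size + 1):
        for removed in combinations(history, k):
            reduced = [h for h in history if h not in removed]
            if reduced and recommend(reduced) != original:
                return removed, recommend(reduced)
    return None

history = ["A", "C"]
print(recommend(history))                  # "B"
print(retrospective_explanation(history))  # (("A",), "D")
```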
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Causal Estimation of User Learning in Personalized Systems [5.016998307223021]
We introduce a non-parametric causal model of user actions in a personalized system.
We show that the Cookie-Cookie-Day (CCD) experiment, designed to measure the user learning effect, is biased in the presence of personalization.
We derive new experimental designs that intervene in the personalization system to generate the variation necessary to separately identify the causal effect mediated through user learning and personalization.
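The bias argument can be illustrated with a toy simulation, assuming an additive per-day user-learning effect and an additive personalization effect; all numbers are invented. In the standard contrast both effects accumulate together, while an intervened design that freezes personalization isolates user learning.

```python
# Toy simulation (all parameters invented) of why a Cookie-Cookie-Day (CCD)
# contrast conflates user learning with system personalization.

USER_LEARNING = 0.02    # per-day effect of the user's accumulated experience
PERSONALIZATION = 0.05  # per-day effect of the system's accumulated profile

def outcome(user_days, system_days):
    """Metric as an additive function of both kinds of accumulated state."""
    return 1.0 + USER_LEARNING * user_days + PERSONALIZATION * system_days

days = 30
# CCD contrast: persistent cookie vs. fresh cookie-day cookies. In the
# persistent arm, BOTH the user and the personalization system carry
# `days` worth of state, so the difference mixes the two effects.
ccd_estimate = outcome(days, days) - outcome(0, 0)

# Intervened design: hold the personalization state fixed across arms so the
# contrast varies only the user's experience, identifying user learning alone.
intervened_estimate = outcome(days, 0) - outcome(0, 0)

print(f"true user-learning effect:   {USER_LEARNING * days:.2f}")
print(f"CCD estimate (confounded):   {ccd_estimate:.2f}")
print(f"intervened-design estimate:  {intervened_estimate:.2f}")
```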
arXiv Detail & Related papers (2023-06-01T09:37:43Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
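A hypothetical toy of the round-based setup: a trivial "QA system" (a preference table over candidate answer spans) is deployed, collects simulated thumbs-up/down feedback, and is updated between rounds. This sketches the feedback loop only; it is not the paper's model or data.

```python
# Invented toy of learning from user feedback over deployment rounds.
import random
from collections import defaultdict

rng = random.Random(0)
GOLD = {"q1": "span_b", "q2": "span_d"}                  # hidden gold answers
CANDIDATES = {"q1": ["span_a", "span_b"], "q2": ["span_c", "span_d"]}
scores = defaultdict(float)                              # (question, span) -> score

def answer(q, epsilon=0.2):
    """Mostly-greedy span selection with a little exploration."""
    if rng.random() < epsilon:
        return rng.choice(CANDIDATES[q])
    return max(CANDIDATES[q], key=lambda s: scores[(q, s)])

for round_id in range(5):                                # deployment rounds
    interactions = []
    for _ in range(50):                                  # simulated user traffic
        q = rng.choice(list(GOLD))
        span = answer(q)
        reward = 1.0 if span == GOLD[q] else -0.5        # thumbs up / down
        interactions.append((q, span, reward))
    for q, span, reward in interactions:                 # offline update step
        scores[(q, span)] += 0.1 * reward
    acc = sum(answer(q, epsilon=0.0) == GOLD[q] for q in GOLD) / len(GOLD)
    print(f"round {round_id}: greedy accuracy {acc:.1f}")
```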
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback.
Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment.
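A minimal numpy sketch of the importance-sampling idea, assuming a toy softmax classifier in place of a conversational QA system: logged outputs sampled from the deployed model are reweighted by reward divided by their probability at logging time. The model, data, and step size are stand-ins, not the paper's formulation.

```python
# Importance-sampling-weighted updates from binary feedback (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((2, 3))                       # tiny linear model: 2 features -> 3 labels

def probs(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

# Simulated deployment log: (features, sampled answer, its probability at
# logging time, binary user feedback). Feedback is 1 iff label 0 was returned.
log = []
for _ in range(200):
    x = rng.normal(size=2)
    p = probs(x)
    y = rng.choice(3, p=p)                 # answer sampled from deployed model
    log.append((x, y, p[y], float(y == 0)))

# Feedback-weighted update: positive interactions are reweighted by the inverse
# of their probability under the logging policy (importance sampling).
for x, y, p_log, reward in log:
    weight = reward / max(p_log, 1e-6)
    grad = np.outer(x, probs(x))           # gradient of -log p(y|x), softmax part
    grad[:, y] -= x
    W -= 0.05 * weight * grad

x_test = rng.normal(size=2)
print("post-update distribution:", np.round(probs(x_test), 3))
```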
arXiv Detail & Related papers (2020-11-01T19:50:34Z)
- The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems [7.3817525365473875]
Domain-specific intelligent systems are meant to help system users in their decision-making process.
Prior domain knowledge can affect user trust and confidence in detecting system errors.
Our research explores the relationship between ordering bias and domain expertise when users encounter errors in intelligent systems.
arXiv Detail & Related papers (2020-08-20T17:41:02Z)
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
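A small sketch of the underlying optimization, under an invented model of the user: given a (miscalibrated) mapping from observations to the user's internal state estimate, choose the candidate observation whose induced estimate is closest to the true state. The user model and candidates below are hypothetical.

```python
# Select the observation that minimizes the user's state-estimation error.
import numpy as np

true_state = np.array([2.0, -1.0])

def user_estimate(observation, gain=0.8, bias=np.array([0.3, 0.3])):
    """Assumed (miscalibrated) model of the user's internal state estimate."""
    return gain * observation + bias

def assist(true_state, candidates):
    """Pick the candidate observation whose induced estimate is most accurate."""
    errors = [np.linalg.norm(user_estimate(o) - true_state) for o in candidates]
    return candidates[int(np.argmin(errors))]

# Candidates include the raw state plus distortions that compensate for the
# user's miscalibration; inverting the user model recovers the true state.
candidates = [true_state,                  # show the raw state
              (true_state - 0.3) / 0.8,    # invert the assumed user model
              true_state * 1.2]
best = assist(true_state, candidates)
print("shown:", np.round(best, 2),
      "-> user estimate:", np.round(user_estimate(best), 2))
```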
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
- Personalization in Human-AI Teams: Improving the Compatibility-Accuracy Tradeoff [0.0]
We study the trade-off between improving the system's accuracy following an update and the compatibility of the updated system with prior user experience.
We show that by personalizing the loss function to specific users, in some cases it is possible to improve the compatibility-accuracy trade-off with respect to these users.
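A numpy sketch of one way such a personalized objective can look, assuming a dissonance-style penalty on examples the previous model got right, weighted by per-user importance. The weighting scheme and data are invented, not the authors' formulation.

```python
# Compatibility-penalized loss with a per-user weighting (hypothetical form).
import numpy as np

def nll(p_correct):
    return -np.log(np.clip(p_correct, 1e-9, 1.0))

def compatible_loss(p_new, old_correct, user_weight, lam=0.5):
    """(1 - lam) * accuracy term + lam * user-weighted dissonance term.

    p_new:       new model's probability of the true label, per example
    old_correct: 1 where the previous model was right (new errors there
                 break the user's mental model of the system)
    user_weight: per-example importance for this user (the personalization)
    """
    accuracy_term = nll(p_new).mean()
    dissonance = (user_weight * old_correct * nll(p_new)).sum() / max(
        (user_weight * old_correct).sum(), 1e-9)
    return (1 - lam) * accuracy_term + lam * dissonance

p_new = np.array([0.9, 0.4, 0.7, 0.2])
old_correct = np.array([1, 1, 0, 0])
user_weight = np.array([1.0, 2.0, 1.0, 0.5])   # this user leans on example 2
print(round(compatible_loss(p_new, old_correct, user_weight), 3))
```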
arXiv Detail & Related papers (2020-04-05T19:35:18Z)