Feedback Effect in User Interaction with Intelligent Assistants: Delayed
Engagement, Adaption and Drop-out
- URL: http://arxiv.org/abs/2303.10255v2
- Date: Tue, 18 Apr 2023 15:26:44 GMT
- Authors: Zidi Xiu, Kai-Chen Cheng, David Q. Sun, Jiannan Lu, Hadas Kotek, Yuhan
Zhang, Paul McCarthy, Christopher Klein, Stephen Pulman, Jason D. Williams
- Abstract summary: This paper identifies and quantifies the feedback effect, a novel component in IA-user interactions.
We show that unhelpful responses from the IA cause users to delay or reduce subsequent interactions.
As users discover the limitations of the IA's understanding and functional capabilities, they learn to adjust the scope and wording of their requests.
- Score: 9.205174767678365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing popularity of intelligent assistants (IAs),
evaluating IA quality has become an increasingly active field of research. This
paper identifies and quantifies the feedback effect, a novel component in
IA-user interactions: how the capabilities and limitations of the IA influence
user behavior over time. First, through an observational study, we demonstrate
that unhelpful responses from the IA cause users to delay or reduce subsequent
interactions in the short term. Next, we expand the time horizon to examine
behavior changes and show
that as users discover the limitations of the IA's understanding and functional
capabilities, they learn to adjust the scope and wording of their requests to
increase the likelihood of receiving a helpful response from the IA. Our
findings highlight the impact of the feedback effect at both the micro and meso
levels. We further discuss its macro-level consequences: unsatisfactory
interactions continuously reduce the likelihood and diversity of future user
engagements in a feedback loop.
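
To make the short-term (micro-level) claim concrete, the sketch below shows one
way such a delay effect could be tested on interaction logs. This is not the
authors' actual pipeline: the file name, the column names ("user_id",
"timestamp", "helpful"), and the choice of a one-sided Mann-Whitney U test are
all illustrative assumptions.

    # Minimal sketch (illustrative only): do users wait longer before their
    # next request after an unhelpful IA response than after a helpful one?
    import pandas as pd
    from scipy.stats import mannwhitneyu

    # Hypothetical log with one row per IA interaction:
    #   user_id, timestamp (datetime), helpful (bool)
    log = pd.read_csv("ia_interactions.csv", parse_dates=["timestamp"])
    log = log.sort_values(["user_id", "timestamp"])

    # Gap (in hours) from each interaction to the same user's next request.
    next_ts = log.groupby("user_id")["timestamp"].shift(-1)
    log["gap_hours"] = (next_ts - log["timestamp"]).dt.total_seconds() / 3600.0
    log = log.dropna(subset=["gap_hours"])  # drop each user's final interaction

    after_helpful = log.loc[log["helpful"], "gap_hours"]
    after_unhelpful = log.loc[~log["helpful"], "gap_hours"]

    # One-sided test: are gaps after unhelpful responses stochastically larger,
    # i.e., do users delay their next interaction?
    _, p_value = mannwhitneyu(after_unhelpful, after_helpful, alternative="greater")
    print(f"median gap after helpful:   {after_helpful.median():.2f} h")
    print(f"median gap after unhelpful: {after_unhelpful.median():.2f} h")
    print(f"one-sided Mann-Whitney p-value: {p_value:.4g}")

A longer median gap after unhelpful responses, together with a small p-value,
would be consistent with the delayed-engagement finding; the meso-level
adaptation effect would instead require longitudinal features (e.g., request
scope and wording over time) rather than a single gap statistic.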
Related papers
- "My Grade is Wrong!": A Contestable AI Framework for Interactive Feedback in Evaluating Student Essays [6.810086342993699]
This paper introduces CAELF, a Contestable AI Empowered LLM Framework for automating interactive feedback.
CAELF allows students to query, challenge, and clarify their feedback by integrating a multi-agent system with computational argumentation.
A case study on 500 critical thinking essays with user studies demonstrates that CAELF significantly improves interactive feedback.
arXiv Detail & Related papers (2024-09-11T17:59:01Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short term and damaging human capabilities in the long term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessments of conversational turns has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance to the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Unveiling the Secrets of Engaging Conversations: Factors that Keep Users Hooked on Role-Playing Dialog Agents [17.791787477586574]
The degree to which the bot embodies the roles it plays has limited influence on retention rates, while the length of each turn it speaks affects them significantly.
This study sheds light on critical aspects of user engagement with role-playing models and offers insights for improving large language models developed for role-play.
arXiv Detail & Related papers (2024-02-18T09:42:41Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Re-mine, Learn and Reason: Exploring the Cross-modal Semantic Correlations for Language-guided HOI detection [57.13665112065285]
Human-Object Interaction (HOI) detection is a challenging computer vision task.
We present a framework that enhances HOI detection by incorporating structured text knowledge.
arXiv Detail & Related papers (2023-07-25T14:20:52Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Interacting with Non-Cooperative User: A New Paradigm for Proactive Dialogue Policy [83.61404191470126]
We propose a new solution named I-Pro that can learn a Proactive policy in the Interactive setting.
Specifically, the trade-off is captured via a learned goal weight composed of four factors.
The experimental results demonstrate I-Pro significantly outperforms baselines in terms of effectiveness and interpretability.
arXiv Detail & Related papers (2022-04-07T14:11:31Z)
- The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims [12.00747200817161]
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interactions.
We found that the ability to interactively manipulate the AI system's prediction parameters affected users' dwell times and eye fixations on areas of interest (AOIs), but not their mental workload.
arXiv Detail & Related papers (2022-02-17T21:08:57Z)
- Mitigating Negative Side Effects via Environment Shaping [27.400267388362654]
Agents operating in unstructured environments often produce negative side effects (NSE).
We present an algorithm to solve this problem and analyze its theoretical properties.
Empirical evaluation of our approach shows that the proposed framework can successfully mitigate NSE without affecting the agent's ability to complete its assigned task.
arXiv Detail & Related papers (2021-02-13T22:15:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.