From Interaction to Collaboration: How Hybrid Intelligence Enhances Chatbot Feedback
- URL: http://arxiv.org/abs/2504.13848v1
- Date: Sat, 08 Mar 2025 07:36:36 GMT
- Title: From Interaction to Collaboration: How Hybrid Intelligence Enhances Chatbot Feedback
- Authors: Janet Rafner, Ryan Q. Guloy, Eden W. Wen, Catherine M. Chiodo, Jacob Sherson,
- Abstract summary: This study explores the impact of two distinct narratives and feedback collection mechanisms on user engagement and feedback behavior. Initial findings indicate that while small-scale survey measures showed no significant differences in user willingness to leave feedback, use the system, or trust the system, participants exposed to the HI narrative provided statistically significantly more detailed feedback.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI (GenAI) chatbots are becoming increasingly integrated into virtual assistant technologies, yet their success hinges on the ability to gather meaningful user feedback to improve interaction quality, system outcomes, and overall user acceptance. Successful chatbot interactions can enable organizations to build long-term relationships with their customers and users, supporting customer loyalty and furthering the organization's goals. This study explores the impact of two distinct narratives and feedback collection mechanisms on user engagement and feedback behavior: a standard AI-focused interaction versus a hybrid intelligence (HI) framed interaction. Initial findings indicate that while small-scale survey measures showed no significant differences in user willingness to leave feedback, use the system, or trust the system, participants exposed to the HI narrative provided statistically significantly more detailed feedback. These initial findings offer insights into designing effective feedback systems for GenAI virtual assistants, balancing user effort with system improvement potential.
Related papers
- Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User [117.82681846559909]
Conversational recommendation systems (CRSs) use multi-turn interaction to capture user preferences and provide personalized recommendations. We propose a generative reward model based simulated user, named GRSU, for automatic interaction with CRSs.
arXiv Detail & Related papers (2025-04-29T06:37:30Z) - Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines [9.834055425277874]
This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting. To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users. Our findings provide a deeper understanding of how users engage with Large Language Models and the role of structured prompting guidance in enhancing AI-assisted communication.
arXiv Detail & Related papers (2025-04-10T15:20:43Z) - Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs). The user agent incorporates customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs. Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
arXiv Detail & Related papers (2025-04-09T13:21:17Z) - A Noise-Robust Turn-Taking System for Real-World Dialogue Robots: A Field Experiment [18.814181652728486]
We propose a noise-robust voice activity projection (VAP) model to enhance real-time turn-taking in dialogue robots. We conducted a field experiment in a shopping mall, comparing the VAP system with a conventional cloud-based speech recognition system. The results showed that the proposed system significantly reduced response latency, leading to a more natural conversation.
arXiv Detail & Related papers (2025-03-08T14:53:20Z) - Unveiling the Impact of Multi-Modal Interactions on User Engagement: A Comprehensive Evaluation in AI-driven Conversations [17.409790984399052]
This paper explores the impact of multi-modal interactions, which incorporate images and audio alongside text, on user engagement.
Our findings reveal a significant enhancement in user engagement with multi-modal interactions compared to text-only dialogues.
Results suggest that multi-modal interactions optimize cognitive processing and facilitate richer information comprehension.
arXiv Detail & Related papers (2024-06-21T09:26:55Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z) - A Role-Selected Sharing Network for Joint Machine-Human Chatting Handoff and Service Satisfaction Analysis [35.937850808046456]
We propose a novel model, the Role-Selected Sharing Network (RSSN), which integrates dialogue satisfaction estimation and handoff prediction in one multi-task learning framework.
Unlike prior efforts in dialogue mining, by utilizing local user satisfaction as a bridge, the global satisfaction detector and the handoff predictor can effectively exchange critical information.
arXiv Detail & Related papers (2021-09-17T08:39:45Z) - Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback.
Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment.
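The core idea of that abstract, reweighting a supervised loss by importance-sampling weights derived from binary user feedback, can be sketched minimally. The fragment below is an illustrative assumption-laden sketch, not the paper's implementation: it assumes the logged probability the deployed model assigned to each response and a binary reward per response are available, and `feedback_weighted_loss` is a hypothetical helper name.

```python
def feedback_weighted_loss(log_probs, behavior_probs, rewards):
    """Importance-weighted negative log-likelihood (illustrative sketch).

    log_probs: current model's log-probability of each logged response.
    behavior_probs: probability the deployed (logging) model assigned
        to that same response at collection time.
    rewards: binary user feedback (1 = approved, 0 = rejected).

    Approved responses are upweighted by 1/behavior_prob, correcting
    for how likely the logging policy was to produce them.
    """
    total = 0.0
    for lp, bp, r in zip(log_probs, behavior_probs, rewards):
        weight = r / max(bp, 1e-8)  # importance-sampling weight
        total += -weight * lp       # weighted negative log-likelihood
    return total / len(log_probs)
```

For example, a response the user approved that the logging model emitted with probability 0.5 contributes twice its negative log-likelihood, while rejected responses contribute nothing.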
arXiv Detail & Related papers (2020-11-01T19:50:34Z) - Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy [8.11839312231511]
Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
arXiv Detail & Related papers (2020-08-28T16:46:41Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.