Human-AI Collaboration: The Effect of AI Delegation on Human Task
Performance and Task Satisfaction
- URL: http://arxiv.org/abs/2303.09224v1
- Date: Thu, 16 Mar 2023 11:02:46 GMT
- Title: Human-AI Collaboration: The Effect of AI Delegation on Human Task
Performance and Task Satisfaction
- Authors: Patrick Hemmer, Monika Westphal, Max Schemmer, Sebastian Vetter,
Michael Vössing, Gerhard Satzger
- Abstract summary: We show that task performance and task satisfaction improve through AI delegation.
We identify humans' increased levels of self-efficacy as the underlying mechanism for these improvements.
Our findings provide initial evidence that allowing AI models to take over more management responsibilities can be an effective form of human-AI collaboration.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent work has proposed artificial intelligence (AI) models that can learn
to decide whether to make a prediction for an instance of a task or to delegate
it to a human by considering both parties' capabilities. In simulations with
synthetically generated or context-independent human predictions, delegation
can help improve the performance of human-AI teams -- compared to humans or the
AI model completing the task alone. However, so far, it remains unclear how
humans perform and how they perceive the task when they are aware that an AI
model delegated task instances to them. In an experimental study with 196
participants, we show that task performance and task satisfaction improve
through AI delegation, regardless of whether humans are aware of the
delegation. Additionally, we identify humans' increased levels of self-efficacy
as the underlying mechanism for these improvements in performance and
satisfaction. Our findings provide initial evidence that allowing AI models to
take over more management responsibilities can be an effective form of human-AI
collaboration in workplaces.
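The instance-level delegation the abstract describes can be illustrated with a minimal sketch. The names `delegate`, `ai_confidence`, and `human_accuracy` are hypothetical stand-ins for the learned components, not the authors' implementation:

```python
def delegate(instances, ai_confidence, human_accuracy):
    """Route each instance to 'ai' or 'human'.

    ai_confidence: callable returning the AI's estimated probability of
    being correct on an instance; human_accuracy: the (assumed known)
    human accuracy on this task. Both stand in for learned models that
    would account for each party's capabilities.
    """
    return [
        ("ai" if ai_confidence(x) >= human_accuracy else "human", x)
        for x in instances
    ]


# Instances where the AI's estimated accuracy beats the human's are kept;
# the rest are delegated to the human.
routing = delegate([0.9, 0.3], lambda x: x, 0.5)
```

In this toy setup each instance is its own confidence score, so `0.9` stays with the AI and `0.3` is delegated.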
Related papers
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve performance of a human-AI team is often not clear without knowing what particular information and strategies each agent employs.
We propose a model based on statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Measuring Human Contribution in AI-Assisted Content Generation [68.03658922067487]
This study raises the research question of measuring human contribution in AI-assisted content generation.
By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
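As a rough illustration of that ratio (not the paper's actual estimator), empirical mutual information and entropy can be computed over paired token streams; the function names here are hypothetical:

```python
from collections import Counter
from math import log2

def entropy(tokens):
    # Empirical Shannon entropy (bits per token) of a token sequence.
    n = len(tokens)
    return -sum(c / n * log2(c / n) for c in Counter(tokens).values())

def mutual_information(xs, ys):
    # Empirical mutual information between paired token streams xs and ys.
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

def human_contribution(human_tokens, output_tokens):
    # Shared information between human input and AI-assisted output,
    # relative to the output's total information content.
    h = entropy(output_tokens)
    return mutual_information(human_tokens, output_tokens) / h if h else 0.0
```

If the output exactly mirrors the human input the ratio is 1; if the two streams are statistically independent it is 0.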
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that captures how humans interpret and reason about their AI partner's intentions.
We create an AI agent that incorporates both human behavior and human beliefs when devising its strategy for interacting with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- On the Effect of Contextual Information on Human Delegation Behavior in Human-AI collaboration [3.9253315480927964]
We study the effects of providing contextual information on human decisions to delegate instances to an AI.
We find that providing participants with contextual information significantly improves the human-AI team performance.
This research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
arXiv Detail & Related papers (2024-01-09T18:59:47Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.