On the Effect of Contextual Information on Human Delegation Behavior in
Human-AI collaboration
- URL: http://arxiv.org/abs/2401.04729v1
- Date: Tue, 9 Jan 2024 18:59:47 GMT
- Title: On the Effect of Contextual Information on Human Delegation Behavior in
Human-AI collaboration
- Authors: Philipp Spitzer and Joshua Holstein and Patrick Hemmer and Michael
Vössing and Niklas Kühl and Dominik Martin and Gerhard Satzger
- Abstract summary: We study the effects of providing contextual information on human decisions to delegate instances to an AI.
We find that providing participants with contextual information significantly improves the human-AI team performance.
This research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
- Score: 3.9253315480927964
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The constantly increasing capabilities of artificial intelligence (AI) open
new possibilities for human-AI collaboration. One promising approach to
leverage existing complementary capabilities is allowing humans to delegate
individual instances to the AI. However, enabling humans to delegate instances
effectively requires them to assess both their own and the AI's capabilities in
the context of the given task. In this work, we explore the effects of
providing contextual information on human decisions to delegate instances to an
AI. We find that providing participants with contextual information
significantly improves the human-AI team performance. Additionally, we show
that the delegation behavior changes significantly when participants receive
varying types of contextual information. Overall, this research advances the
understanding of human-AI interaction in human delegation and provides
actionable insights for designing more effective collaborative systems.
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what information and strategies each agent employs.
We propose a model grounded in statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Measuring Human Contribution in AI-Assisted Content Generation [68.03658922067487]
This study raises the research question of measuring human contribution in AI-assisted content generation.
By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
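The proportional-contribution measure described above can be sketched as the ratio of the mutual information between human input and AI-assisted output to the entropy (self-information) of the output. The sketch below is a minimal illustration assuming discrete inputs and outputs with a known joint distribution; the function name and toy data are hypothetical, not taken from the paper.

```python
import math

def human_contribution(joint):
    """Proportional human contribution: I(X; Y) / H(Y),
    where X is the human input and Y is the AI-assisted output.
    `joint` maps (x, y) pairs to joint probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # Mutual information I(X; Y) in bits
    mi = sum(p * math.log2(p / (px[x] * py[y]))
             for (x, y), p in joint.items() if p > 0)
    # Entropy H(Y) of the output in bits
    h_y = -sum(p * math.log2(p) for p in py.values() if p > 0)
    return mi / h_y

# Output fully determined by human input: contribution is maximal
joint = {("a", "A"): 0.5, ("b", "B"): 0.5}
print(human_contribution(joint))  # → 1.0
```

When input and output are independent, the mutual information is zero and the measure returns 0, matching the intuition that the human contributed no information to the output.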
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction [0.0]
We show that task performance and task satisfaction improve through AI delegation.
We identify humans' increased levels of self-efficacy as the underlying mechanism for these improvements.
Our findings provide initial evidence that allowing AI models to take over more management responsibilities can be an effective form of human-AI collaboration.
arXiv Detail & Related papers (2023-03-16T11:02:46Z)
- On the Effect of Information Asymmetry in Human-AI Teams [0.0]
We focus on the existence of complementarity potential between humans and AI.
Specifically, we identify information asymmetry as an essential source of complementarity potential.
By conducting an online experiment, we demonstrate that humans can use such contextual information to adjust the AI's decision.
arXiv Detail & Related papers (2022-05-03T13:02:50Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.