Complementarity in Human-AI Collaboration: Concept, Sources, and Evidence
- URL: http://arxiv.org/abs/2404.00029v2
- Date: Mon, 25 Nov 2024 22:04:11 GMT
- Title: Complementarity in Human-AI Collaboration: Concept, Sources, and Evidence
- Authors: Patrick Hemmer, Max Schemmer, Niklas Kühl, Michael Vössing, Gerhard Satzger
- Abstract summary: We develop a concept of complementarity and formalize its theoretical potential.
We identify information and capability asymmetry as the two key sources of complementarity.
Our work provides researchers with a comprehensive theoretical foundation of human-AI complementarity in decision-making.
- Score: 6.571063542099526
- Abstract: Artificial intelligence (AI) has the potential to significantly enhance human performance across various domains. Ideally, collaboration between humans and AI should result in complementary team performance (CTP) -- a level of performance that neither of them can attain individually. So far, however, CTP has rarely been observed, suggesting an insufficient understanding of the principle and the application of complementarity. Therefore, we develop a general concept of complementarity and formalize its theoretical potential as well as the actual realized effect in decision-making situations. Moreover, we identify information and capability asymmetry as the two key sources of complementarity. Finally, we illustrate the impact of each source on complementarity potential and effect in two empirical studies. Our work provides researchers with a comprehensive theoretical foundation of human-AI complementarity in decision-making and demonstrates that leveraging these sources constitutes a viable pathway towards designing effective human-AI collaboration, i.e., the realization of CTP.
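The abstract's core condition — complementary team performance (CTP), a level of performance neither agent attains individually — can be sketched as a simple comparison. The function name and the accuracy numbers below are illustrative assumptions, not the paper's actual formalization.

```python
# Minimal sketch of the CTP condition from the abstract.
# `is_ctp` and all numbers are hypothetical; the paper develops a
# fuller formalization of complementarity potential and effect.

def is_ctp(team_accuracy: float, human_accuracy: float, ai_accuracy: float) -> bool:
    """CTP holds when the human-AI team outperforms the best individual agent."""
    return team_accuracy > max(human_accuracy, ai_accuracy)

print(is_ctp(0.91, 0.82, 0.88))  # True: the team beats both individuals
print(is_ctp(0.85, 0.82, 0.88))  # False: the AI alone performs better
```

The comparison against the *better* individual (not the average) is what makes CTP a strict bar: merely improving on the weaker teammate does not count as complementarity.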
Related papers
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAI-Co2, a novel human-AI co-construction framework.
We formalize HAI-Co2 and discuss the difficult open research problems that it faces.
We present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- Confounding-Robust Policy Improvement with Human-AI Teams [9.823906892919746]
We propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM).
Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden.
arXiv Detail & Related papers (2023-10-13T02:39:52Z)
- Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice [3.595471754135419]
We show the relationship between learning and appropriate reliance in an experiment with 100 participants.
This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making.
arXiv Detail & Related papers (2023-10-03T14:51:53Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a novel framework for human-AI collaboration in selecting an advantageous course of action.
Our solution aims to exploit human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- On the Effect of Information Asymmetry in Human-AI Teams [0.0]
We focus on the existence of complementarity potential between humans and AI.
Specifically, we identify information asymmetry as an essential source of complementarity potential.
By conducting an online experiment, we demonstrate that humans can use such contextual information to adjust the AI's decision.
arXiv Detail & Related papers (2022-05-03T13:02:50Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z)
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.