Designing Algorithmic Recommendations to Achieve Human-AI Complementarity
- URL: http://arxiv.org/abs/2405.01484v1
- Date: Thu, 2 May 2024 17:15:30 GMT
- Title: Designing Algorithmic Recommendations to Achieve Human-AI Complementarity
- Authors: Bryce McLaughlin, Jann Spiess
- Abstract summary: We formalize the design of recommendation algorithms that assist human decision-makers.
We show that our framework can help design solutions that realize human-AI complementarity.
- Score: 2.4247752614854203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms frequently assist, rather than replace, human decision-makers. However, the design and analysis of algorithms often focus on predicting outcomes and do not explicitly model their effect on human decisions. This discrepancy between the design and role of algorithmic assistants becomes of particular concern in light of empirical evidence that suggests that algorithmic assistants repeatedly fail to improve human decisions. In this article, we formalize the design of recommendation algorithms that assist human decision-makers without making restrictive ex-ante assumptions about how recommendations affect decisions. We formulate an algorithmic-design problem that leverages the potential-outcomes framework from causal inference to model the effect of recommendations on a human decision-maker's binary treatment choice. Within this model, we introduce a monotonicity assumption that leads to an intuitive classification of human responses to the algorithm. Under this monotonicity assumption, we can express the human's response to algorithmic recommendations in terms of their compliance with the algorithm and the decision they would take if the algorithm sends no recommendation. We showcase the utility of our framework using an online experiment that simulates a hiring task. We argue that our approach explains the relative performance of different recommendation algorithms in the experiment, and can help design solutions that realize human-AI complementarity.
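To make the monotonicity assumption concrete, here is a minimal Python sketch (illustrative only, not the authors' code; the function name and the typology labels are assumptions) that enumerates the response patterns d(0) <= d(None) <= d(1) for a binary treatment choice and labels each by compliance and default decision:

```python
# Minimal sketch of the abstract's monotone response typology (illustrative,
# not the authors' code). d0, d_none, d1 are the potential decisions under
# recommendation 0, no recommendation, and recommendation 1, respectively.
from itertools import product


def response_type(d0: int, d_none: int, d1: int) -> str:
    """Classify a response pattern under monotonicity, d(0) <= d(None) <= d(1):
    a recommendation never pushes the decision in the opposite direction."""
    if not (d0 <= d_none <= d1):
        raise ValueError("pattern violates the monotonicity assumption")
    if (d0, d1) == (0, 1):
        # Follows whichever recommendation is sent; behaviour is summarized
        # by compliance plus the default decision d(None).
        return f"complier with default decision {d_none}"
    # Otherwise monotonicity forces d0 == d_none == d1: recommendations are ignored.
    return f"non-complier who always decides {d_none}"


# Enumerate the four response patterns consistent with monotonicity.
for d0, d_none, d1 in product([0, 1], repeat=3):
    if d0 <= d_none <= d1:
        print((d0, d_none, d1), "->", response_type(d0, d_none, d1))
```

Under this reading, a human's response is fully described by whether they comply with the algorithm and by the decision they would take absent a recommendation, which is exactly the characterization the abstract describes.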
Related papers
- Human Expertise in Algorithmic Prediction [16.104330706951004]
We introduce a novel framework for incorporating human expertise into algorithmic predictions.
Our approach focuses on the use of human judgment to distinguish inputs which look 'the same' to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-02-01T17:23:54Z)
- Decision-aid or Controller? Steering Human Decision Makers with Algorithms [5.449173263947196]
We study a decision-aid algorithm that learns about the human decision maker and provides "personalized recommendations" to influence final decisions.
We discuss the potential applications of such algorithms and their social implications.
arXiv Detail & Related papers (2023-03-23T23:24:26Z)
- Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classic approaches cannot solve.
In this paper we construct a novel socio-cognitive metaheuristic based on castes and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z)
- Learning When to Advise Human Decision Makers [12.47847261193524]
We propose a novel design of AI systems in which the algorithm interacts with the human user in a two-sided manner.
The results of a large-scale experiment show that our advising approach manages to provide advice at times of need.
arXiv Detail & Related papers (2022-09-27T17:52:13Z)
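As a toy reading of the two-sided, advise-only-when-needed design in the entry above (a hypothetical policy sketch, not the paper's algorithm; both inputs and thresholds are illustrative assumptions):

```python
# Hypothetical "when to advise" policy in the spirit of the paper above (not
# its actual algorithm): advise only when the model is confident enough and
# the unassisted human is predicted likely to err. Thresholds are illustrative.

def should_advise(model_conf: float,
                  p_human_correct: float,
                  conf_min: float = 0.8,
                  human_max: float = 0.5) -> bool:
    """Advise only 'at times of need': confident model, struggling human."""
    return model_conf >= conf_min and p_human_correct <= human_max


# Confident model, struggling human -> advise.
print(should_advise(model_conf=0.92, p_human_correct=0.35))  # True
# Uncertain model -> stay silent and defer to the human.
print(should_advise(model_conf=0.55, p_human_correct=0.35))  # False
```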
- Algorithmic Assistance with Recommendation-Dependent Preferences [2.864550757598007]
We consider the effect and design of algorithmic recommendations when they affect choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
arXiv Detail & Related papers (2022-08-16T09:24:47Z)
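One hedged way to formalize the recommendation-dependent preferences in the entry above (an illustrative reconstruction; the additive form and the adherence cost \psi are assumptions, not necessarily the paper's exact specification):

```latex
% Illustrative payoff: deviating from the recommendation r carries an
% adherence cost \psi \geq 0 on top of the decision payoff u(d).
U(d, r) \;=\; u(d) \;-\; \psi \, \mathbf{1}\{ d \neq r \}
```

For large \psi the decision-maker rubber-stamps r even when u favors the other action, which is one way to read the over-responsiveness the summary describes.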
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z)
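For reference, a hedged sketch of the Decision-Estimation Coefficient from the entry above, stated up to notational conventions (\Pi is the decision space, f^M(\pi) the mean reward under model M, \pi_M an optimal decision for M, and D_H the Hellinger distance):

```latex
\operatorname{dec}_{\gamma}(\mathcal{M}, \bar{M})
  \;=\; \inf_{p \in \Delta(\Pi)} \, \sup_{M \in \mathcal{M}} \,
  \mathbb{E}_{\pi \sim p}\!\left[
    f^{M}(\pi_{M}) - f^{M}(\pi)
    - \gamma \, D_{\mathrm{H}}^{2}\big(M(\pi), \bar{M}(\pi)\big)
  \right]
```

The first two terms are the regret of the decision \pi against model M; the subtracted term credits the information gained when M is statistically distinguishable from the reference model \bar{M}.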
- Decision-Making Algorithms for Learning and Adaptation with Application to COVID-19 Data [46.71828464689144]
This work focuses on the development of a new family of decision-making algorithms for adaptation and learning.
A key observation is that estimation and decision problems are structurally different and, therefore, algorithms that have proven successful for the former need not perform well when adjusted for decision problems.
arXiv Detail & Related papers (2020-12-14T18:24:45Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
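A toy black-box poisoning sketch in the spirit of the entry above (hypothetical: random-search perturbations against a KMeans victim; the query budget, perturbation scale, and use of the adjusted Rand index are illustrative choices, not the paper's method):

```python
# Toy black-box poisoning attack (illustrative, not the paper's algorithm):
# perturb a few points by random search, keeping changes that move the
# victim's clustering the furthest from the clean one.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)


def victim_labels(data):
    # Black-box access: we only observe the clustering output.
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)


clean = victim_labels(X)
X_adv, best = X.copy(), 1.0  # ARI = 1.0 means the clustering is unchanged
for _ in range(50):  # query budget
    cand = X_adv.copy()
    idx = rng.choice(len(cand), size=5, replace=False)
    cand[idx] += rng.normal(scale=0.5, size=(5, cand.shape[1]))
    score = adjusted_rand_score(clean, victim_labels(cand))
    if score < best:  # lower ARI = clustering disrupted more
        X_adv, best = cand, score

print(f"ARI between clean and poisoned clustering: {best:.3f}")
```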
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.