Designing Algorithmic Recommendations to Achieve Human-AI Complementarity
- URL: http://arxiv.org/abs/2405.01484v2
- Date: Wed, 30 Oct 2024 03:56:34 GMT
- Title: Designing Algorithmic Recommendations to Achieve Human-AI Complementarity
- Authors: Bryce McLaughlin, Jann Spiess
- Abstract summary: We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
- Score: 2.4247752614854203
- Abstract: Algorithms frequently assist, rather than replace, human decision-makers. However, the design and analysis of algorithms often focus on predicting outcomes and do not explicitly model their effect on human decisions. This discrepancy between the design and role of algorithmic assistants becomes particularly concerning in light of empirical evidence that algorithmic assistants repeatedly fail to improve human decisions. In this article, we formalize the design of recommendation algorithms that assist human decision-makers without making restrictive ex-ante assumptions about how recommendations affect decisions. We formulate an algorithmic-design problem that leverages the potential-outcomes framework from causal inference to model the effect of recommendations on a human decision-maker's binary treatment choice. Within this model, we introduce a monotonicity assumption that leads to an intuitive classification of human responses to the algorithm. Under this assumption, we can express the human's response to algorithmic recommendations in terms of their compliance with the algorithm and the active decision they would take if the algorithm sends no recommendation. We showcase the utility of our framework using an online experiment that simulates a hiring task. We argue that our approach can make sense of the relative performance of different recommendation algorithms in the experiment and can help design solutions that realize human-AI complementarity. Finally, we leverage our approach to derive minimax optimal recommendation algorithms that can be implemented with machine learning using limited training data.
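To make the response classification concrete, here is a minimal simulation sketch. It is not the paper's model or its minimax derivation: the covariate, the benefit function, the type shares, and the three candidate recommendation policies are all illustrative assumptions. It only illustrates why, under a monotonicity assumption, the value of a recommendation rule is driven by the compliers it shifts, while always-takers and never-takers are unaffected by it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical covariate and treatment-benefit model (not from the paper):
# treatment helps only when x > 0.
x = rng.normal(size=n)
benefit = np.where(x > 0, 1.0, -1.0)

# Response types under the monotonicity assumption:
#   complier     - follows the recommendation when one is sent
#   always-taker - treats regardless of the recommendation
#   never-taker  - never treats regardless of the recommendation
types = rng.choice(["complier", "always", "never"], size=n, p=[0.5, 0.25, 0.25])

def decision(recommend: np.ndarray) -> np.ndarray:
    """Human treatment decision given a 0/1 recommendation, by response type."""
    d = np.where(types == "always", 1, 0)   # always/never-takers ignore the recommendation
    compliers = types == "complier"
    # Simplification: compliers take no treatment unless treatment is recommended.
    d = np.where(compliers, recommend, d)
    return d

def value(recommend: np.ndarray) -> float:
    """Average realized benefit when the given recommendations are sent."""
    return float(np.mean(decision(recommend) * benefit))

policies = {
    "never recommend":  np.zeros(n, dtype=int),
    "always recommend": np.ones(n, dtype=int),
    "recommend if x>0": (x > 0).astype(int),
}
for name, r in policies.items():
    print(f"{name:>17}: average benefit {value(r):+.3f}")
```

In this toy, the covariate-based rule beats the blanket policies precisely because it only moves compliers toward treatment when treatment is beneficial; the paper's minimax analysis concerns choosing such rules when these quantities must be learned from limited training data.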
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-10-11T13:03:53Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method in which optimal solutions are learned directly from the observable features by joint predictive models; a toy sketch contrasting the two approaches appears after this list.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Human Expertise in Algorithmic Prediction [16.104330706951004]
We introduce a novel framework for incorporating human expertise into algorithmic predictions.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to predictive algorithms.
arXiv Detail & Related papers (2024-02-01T17:23:54Z) - Learning When to Advise Human Decision Makers [12.47847261193524]
We propose a novel design of AI systems in which the algorithm interacts with the human user in a two-sided manner.
The results of a large-scale experiment show that our advising approach provides advice when it is most needed.
arXiv Detail & Related papers (2022-09-27T17:52:13Z) - Algorithmic Assistance with Recommendation-Dependent Preferences [2.864550757598007]
We consider the effect and design of algorithmic recommendations when they affect choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
arXiv Detail & Related papers (2022-08-16T09:24:47Z) - Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z) - Improving Human Sequential Decision-Making with Reinforcement Learning [29.334511328067777]
We design a novel machine learning algorithm that is capable of extracting "best practices" from trace data.
Our algorithm selects the tip that best bridges the gap between the actions taken by human workers and those taken by the optimal policy.
Experiments show that the tips generated by our algorithm can significantly improve human performance.
arXiv Detail & Related papers (2021-08-19T02:57:58Z) - An Overview and Experimental Study of Learning-based Optimization Algorithms for Vehicle Routing Problem [49.04543375851723]
The vehicle routing problem (VRP) is a typical discrete optimization problem.
Many studies consider learning-based optimization algorithms to solve VRP.
This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches.
arXiv Detail & Related papers (2021-07-15T02:13:03Z) - Decision-Making Algorithms for Learning and Adaptation with Application to COVID-19 Data [46.71828464689144]
This work focuses on the development of a new family of decision-making algorithms for adaptation and learning.
A key observation is that estimation and decision problems are structurally different and, therefore, algorithms that have proven successful for the former need not perform well when adjusted for decision problems.
arXiv Detail & Related papers (2020-12-14T18:24:45Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
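As a side note on the "Learning Joint Models of Prediction and Optimization" entry above, the following toy sketch contrasts the two pipelines it describes. Everything in it (the routing instance, the linear cost models, the linear decision model) is an assumption made for illustration and is not taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Toy instance: one feature, two candidate routes with noisy linear costs."""
    x = rng.uniform(-2, 2, size=n)
    cost_a = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=n)  # assumed cost model
    cost_b = 1.5 - 0.5 * x + rng.normal(scale=0.8, size=n)
    return x, np.column_stack([cost_a, cost_b])

x_tr, c_tr = make_data(2_000)
x_te, c_te = make_data(2_000)
X_tr = np.column_stack([np.ones_like(x_tr), x_tr])  # add intercept
X_te = np.column_stack([np.ones_like(x_te), x_te])

# Two-stage ("predict, then optimize"): regress each route's cost on the
# feature, then pick the route with the smaller predicted cost.
coefs, *_ = np.linalg.lstsq(X_tr, c_tr, rcond=None)
choice_two_stage = (X_te @ coefs).argmin(axis=1)

# Joint / decision-focused: learn the decision itself. Here the "model" is a
# linear score fit to the observed cost difference; its sign picks the route.
diff = c_tr[:, 0] - c_tr[:, 1]                       # > 0 means route B is cheaper
w, *_ = np.linalg.lstsq(X_tr, diff, rcond=None)
choice_joint = (X_te @ w > 0).astype(int)

def realized_cost(choice):
    """Average realized cost of the chosen routes on the test set."""
    return float(c_te[np.arange(len(choice)), choice].mean())

print("two-stage :", round(realized_cost(choice_two_stage), 3))
print("joint     :", round(realized_cost(choice_joint), 3))
print("oracle    :", round(float(c_te.min(axis=1).mean()), 3))
```

In this linear toy the two pipelines end up making similar choices; the contrast is structural: the first learns the costs and optimizes afterwards, while the second learns the feature-to-decision mapping directly.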
This list is automatically generated from the titles and abstracts of the papers on this site.