Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework
- URL: http://arxiv.org/abs/2410.08783v2
- Date: Thu, 17 Oct 2024 21:13:08 GMT
- Title: Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework
- Authors: Rohan Alur, Loren Laine, Darrick K. Li, Dennis Shung, Manish Raghavan, Devavrat Shah
- Abstract summary: We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
- Score: 12.967730957018688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel framework for human-AI collaboration in prediction and decision tasks. Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm. We argue that this framing clarifies the problem of human-AI collaboration in prediction and decision tasks, as experts often form judgments by drawing on information which is not encoded in an algorithm's training data. Algorithmic indistinguishability yields a natural test for assessing whether experts incorporate this kind of "side information", and further provides a simple but principled method for selectively incorporating human feedback into algorithmic predictions. We show that this method provably improves the performance of any feasible algorithmic predictor and precisely quantify this improvement. We demonstrate the utility of our framework in a case study of emergency room triage decisions, where we find that although algorithmic risk scores are highly competitive with physicians, there is strong evidence that physician judgments provide signal which could not be replicated by any predictive algorithm. This insight yields a range of natural decision rules which leverage the complementary strengths of human experts and predictive algorithms.
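One way to make the side-information test concrete, as a minimal sketch rather than the paper's exact procedure: bin inputs by the algorithmic risk score so that inputs within a bin "look the same" to the predictor, then run a permutation test for whether expert predictions still track outcomes within bins. The binning choice, the test statistic, and all names below are illustrative assumptions.

```python
import numpy as np

def side_information_test(algo_scores, expert_preds, outcomes,
                          n_bins=10, n_perm=1000, seed=0):
    """Permutation test for expert 'side information': within bins of the
    algorithmic score (a stand-in for sets of inputs the algorithm cannot
    distinguish), does the expert's prediction still covary with outcomes?"""
    rng = np.random.default_rng(seed)
    edges = np.quantile(algo_scores, np.linspace(0, 1, n_bins + 1)[1:-1])
    cells = np.digitize(algo_scores, edges)

    def within_cell_cov(preds):
        total = 0.0
        for c in np.unique(cells):
            m = cells == c
            if m.sum() > 1:
                total += m.sum() * np.cov(preds[m], outcomes[m])[0, 1]
        return total / len(outcomes)

    observed = within_cell_cov(expert_preds)
    null = np.empty(n_perm)
    for t in range(n_perm):
        shuffled = expert_preds.copy()
        for c in np.unique(cells):
            m = cells == c
            shuffled[m] = rng.permutation(shuffled[m])  # break any within-cell link to outcomes
        null[t] = within_cell_cov(shuffled)
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p_value
```

A rejection suggests the expert sees something the score does not, in which case a natural selective rule is to shift the algorithm's prediction toward the expert's only within the bins where that signal shows up.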
Related papers
- Designing Algorithmic Recommendations to Achieve Human-AI Complementarity [2.4247752614854203]
We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
arXiv Detail & Related papers (2024-05-02T17:15:30Z)
- Human Expertise in Algorithmic Prediction [16.104330706951004]
We introduce a novel framework for incorporating human expertise into algorithmic predictions.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to predictive algorithms.
arXiv Detail & Related papers (2024-02-01T17:23:54Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully autonomous machine behavior is often beyond ethical affordances.
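A toy mediator in this spirit, assuming a simple confidence-threshold deferral rule (an illustration, not the paper's method; all names are invented here):

```python
import numpy as np

def mediate(model_probs, human_actions, threshold=0.8):
    """Act autonomously only when the model is confident; otherwise pass
    through the (imperfect) human's action. The threshold rule is an
    illustrative assumption."""
    confident = model_probs.max(axis=1) >= threshold
    machine_actions = model_probs.argmax(axis=1)
    return np.where(confident, machine_actions, human_actions)
```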
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Adversarial Training Should Be Cast as a Non-Zero-Sum Game [121.95628660889628]
The two-player zero-sum paradigm of adversarial training has not engendered sufficient levels of robustness.
We show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on robustness.
A novel non-zero-sum bilevel formulation of adversarial training yields a framework that matches and in some cases outperforms state-of-the-art attacks.
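As a hedged illustration of the non-zero-sum idea (a sketch under assumed hyperparameters, not necessarily the paper's exact formulation), the attacker below ascends a margin objective that directly targets misclassification while the defender minimizes the usual cross-entropy surrogate:

```python
import torch
import torch.nn.functional as F

def negative_margin(logits, y):
    """Attacker objective: largest wrong-class logit minus the true-class
    logit. Maximizing it directly targets misclassification instead of the
    defender's surrogate loss."""
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float("-inf"))
    return masked.max(dim=1).values - true_logit

def non_zero_sum_step(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """One training step: the attacker runs PGD on the negative margin,
    then the defender's cross-entropy on the resulting adversarial
    examples is returned for backprop. Clamping inputs to a valid data
    range is omitted for brevity."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        attack_obj = negative_margin(model(x + delta), y).sum()
        (grad,) = torch.autograd.grad(attack_obj, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return F.cross_entropy(model(x + delta.detach()), y)
```

The design point is that the two players optimize different objectives, so the attacker is no longer tied to the defender's surrogate relaxation.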
arXiv Detail & Related papers (2023-06-19T16:00:48Z)
- Auditing for Human Expertise [12.967730957018688]
We develop a statistical framework under which we can pose this question as a natural hypothesis test.
We propose a simple procedure which tests whether expert predictions are statistically independent of the outcomes of interest.
A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data.
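A toy version of such a test, under illustrative assumptions (a 2-D feature array, greedy matching, and names invented here; the paper's actual procedure may differ): pair observations with similar features and ask whether the expert's within-pair ranking predicts the within-pair outcome.

```python
import numpy as np
from scipy.stats import binomtest

def audit_expert(features, expert_preds, outcomes, seed=0):
    """Matched-pairs sketch: greedily pair observations with similar
    features, then check whether the expert's relative ranking within a
    pair predicts the relative outcome. Under the null (no information
    beyond the features), concordance is a fair coin flip."""
    rng = np.random.default_rng(seed)
    unused = list(rng.permutation(len(outcomes)))
    concordant = discordant = 0
    while len(unused) >= 2:
        i = unused.pop()
        dists = np.linalg.norm(features[unused] - features[i], axis=1)
        j = unused.pop(int(np.argmin(dists)))  # nearest remaining neighbor
        dp = expert_preds[i] - expert_preds[j]
        dy = outcomes[i] - outcomes[j]
        if dp * dy > 0:
            concordant += 1
        elif dp * dy < 0:
            discordant += 1
    n = concordant + discordant
    return binomtest(concordant, n, 0.5).pvalue if n else 1.0
```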
arXiv Detail & Related papers (2023-06-02T16:15:24Z)
- Algorithmic Decision-Making Safeguarded by Human Knowledge [8.482569811904028]
We study the augmentation of algorithmic decisions with human knowledge.
We show that when the algorithmic decision is optimal with large data, the non-data-driven human guardrail usually provides no benefit.
Outside this regime, however, even with sufficient data, the augmentation from human knowledge can still improve the performance of the algorithmic decision.
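A minimal sketch of a guardrail of this kind, assuming a newsvendor-style ordering decision clipped to expert-supplied bounds (the setting and all names are illustrative assumptions, not necessarily the paper's exact model):

```python
import numpy as np

def guardrailed_order(demand_samples, underage_cost, overage_cost,
                      human_lower=None, human_upper=None):
    """Data-driven newsvendor order quantity (an empirical quantile at the
    critical ratio), clipped to bounds supplied by a human expert."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    q = float(np.quantile(demand_samples, critical_ratio))
    if human_lower is not None:
        q = max(q, human_lower)  # expert: "never order less than this"
    if human_upper is not None:
        q = min(q, human_upper)  # expert: "never order more than this"
    return q
```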
arXiv Detail & Related papers (2022-11-20T17:13:32Z)
- Algorithmic Assistance with Recommendation-Dependent Preferences [2.864550757598007]
We consider the effect and design of algorithmic recommendations when the recommendations themselves affect decision-makers' choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
arXiv Detail & Related papers (2022-08-16T09:24:47Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
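One plausible way to fold predictions into Nearest Neighbor, sketched under simplifying assumptions (the full metric is known up front, whereas the online problem reveals edges as nodes are visited, and the trust-threshold rule is illustrative rather than the paper's algorithm):

```python
import numpy as np

def explore_with_predictions(dist, start, predicted_tour, trust=1.5):
    """Greedy exploration of a metric graph given as a distance matrix.
    Follow the predicted tour's next unvisited node when its distance is
    within `trust` times the nearest-neighbor distance; otherwise fall
    back to plain Nearest Neighbor."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    cur, cost, k = start, 0.0, 0
    while unvisited:
        nn = min(unvisited, key=lambda v: dist[cur][v])
        while k < len(predicted_tour) and predicted_tour[k] not in unvisited:
            k += 1  # skip suggestions already visited
        nxt = nn
        if k < len(predicted_tour) and dist[cur][predicted_tour[k]] <= trust * dist[cur][nn]:
            nxt = predicted_tour[k]
        cost += dist[cur][nxt]
        unvisited.remove(nxt)
        cur = nxt
    return cost + dist[cur][start]  # the tour closes at the start node
```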
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box attack that crafts adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
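A minimal black-box stand-in for such an attack (the paper's perturbation strategy is more sophisticated; the random-search loop, budgets, and names here are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def black_box_poison(X, n_clusters=3, eps=0.5, n_victims=5, n_trials=200, seed=0):
    """Random-search sketch of a black-box poisoning attack: perturb a few
    'victim' samples within an L-infinity budget `eps` so as to scramble
    the clustering, scoring candidates only through the clusterer's
    fit_predict output (no access to its internals)."""
    rng = np.random.default_rng(seed)
    clean = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    victims = rng.choice(len(X), size=n_victims, replace=False)
    best_X, best_score = X, 1.0  # ARI of 1.0 = identical clustering
    for _ in range(n_trials):
        Xp = X.copy()
        Xp[victims] += rng.uniform(-eps, eps, size=Xp[victims].shape)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(Xp)
        score = adjusted_rand_score(clean, labels)  # lower = more disruption
        if score < best_score:
            best_X, best_score = Xp, score
    return best_X, best_score
```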
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)