Human Expertise in Algorithmic Prediction
- URL: http://arxiv.org/abs/2402.00793v2
- Date: Wed, 22 May 2024 15:40:23 GMT
- Title: Human Expertise in Algorithmic Prediction
- Authors: Rohan Alur, Manish Raghavan, Devavrat Shah
- Abstract summary: We introduce a novel framework for incorporating human expertise into algorithmic predictions.
Our approach focuses on the use of human judgment to distinguish inputs which `look the same' to any feasible predictive algorithm.
- Score: 16.104330706951004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach focuses on the use of human judgment to distinguish inputs which `look the same' to any feasible predictive algorithm. We argue that this framing clarifies the problem of human/AI collaboration in prediction tasks, as experts often have access to information -- particularly subjective information -- which is not encoded in the algorithm's training data. We use this insight to develop a set of principled algorithms for selectively incorporating human feedback only when it improves the performance of any feasible predictor. We find empirically that although algorithms often outperform their human counterparts on average, human judgment can significantly improve algorithmic predictions on specific instances (which can be identified ex-ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration.
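To make the selective-incorporation idea concrete, here is a minimal sketch (not the paper's actual procedure): group inputs into cells that are approximately indistinguishable to the algorithm, and keep the human's input only in cells where it demonstrably reduces error on held-out data. The `cell_ids` grouping and the naive 50/50 blend are illustrative assumptions.

```python
import numpy as np

def useful_cells(cell_ids, algo_pred, human_pred, outcomes):
    """Find cells of (approximately) algorithmically indistinguishable
    inputs where blending in the human forecast lowers squared error.
    All arguments are aligned NumPy arrays over a held-out sample."""
    keep = set()
    for c in np.unique(cell_ids):
        m = cell_ids == c
        y, a, h = outcomes[m], algo_pred[m], human_pred[m]
        algo_mse = np.mean((y - a) ** 2)
        blend_mse = np.mean((y - 0.5 * (a + h)) ** 2)  # naive 50/50 blend
        if blend_mse < algo_mse:
            keep.add(c)
    return keep

# At deployment, solicit and incorporate human judgment only for
# instances whose cell is in the returned set -- identifiable ex-ante.
```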
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-10-11T13:03:53Z)
- Collaborative Intelligence in Sequential Experiments: A Human-in-the-Loop Framework for Drug Discovery [13.438499600701578]
We introduce a human-in-the-loop framework for sequential experiments in drug discovery.
The proposed algorithm processes experimental data to recommend to human experts both promising molecules and molecules whose evaluation could improve the algorithm's performance.
Human experts retain the final decision-making authority based on these recommendations and their domain expertise.
arXiv Detail & Related papers (2024-05-07T02:03:07Z)
- Designing Algorithmic Recommendations to Achieve Human-AI Complementarity [2.4247752614854203]
We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
arXiv Detail & Related papers (2024-05-02T17:15:30Z)
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to into the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- Auditing for Human Expertise [13.740812888680614]
We develop a statistical framework under which we can pose this question as a natural hypothesis test.
We propose a simple procedure which tests whether expert predictions are statistically independent from the outcomes of interest.
A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data.
arXiv Detail & Related papers (2023-06-02T16:15:24Z)
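One way such a test could be implemented is a conditional permutation test: shuffle the expert's predictions within strata that proxy what the algorithm knows, and check whether the observed expert-outcome association is unusually strong. This is only a sketch; the paper's exact test statistic and conditioning may differ.

```python
import numpy as np

def expertise_pvalue(expert_pred, outcomes, strata, n_perm=2000, seed=0):
    """Conditional permutation test: shuffle expert predictions within
    strata (proxying the information available to the algorithm) and
    compare the shuffled expert-outcome association to the observed one.
    A small p-value suggests the expert adds signal beyond the data."""
    rng = np.random.default_rng(seed)
    score = lambda p: abs(np.corrcoef(p, outcomes)[0, 1])
    observed = score(expert_pred)
    exceed = 0
    for _ in range(n_perm):
        perm = np.array(expert_pred, dtype=float)
        for s in np.unique(strata):
            idx = np.flatnonzero(strata == s)
            perm[idx] = rng.permutation(perm[idx])  # preserves strata
        exceed += score(perm) >= observed
    return (1 + exceed) / (1 + n_perm)
```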
arXiv Detail & Related papers (2023-06-02T16:15:24Z) - Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
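For ski rental specifically, a well-known prediction-augmented strategy (due to Purohit, Svitkina, and Kumar, 2018) buys early when the prediction says the season is long and late otherwise, with a parameter lambda tuning the robustness-consistency trade-off mentioned above. A sketch:

```python
import math

def ski_rental_buy_day(buy_cost: int, predicted_days: int, lam: float) -> int:
    """Day on which to buy skis (rent at 1 unit/day before that).
    lam in (0, 1]: small lam trusts the prediction more (better when it
    is accurate); lam = 1 recovers the classic break-even strategy."""
    if predicted_days >= buy_cost:
        return math.ceil(lam * buy_cost)   # prediction says: buy early
    return math.ceil(buy_cost / lam)       # prediction says: delay buying

def total_cost(actual_days: int, buy_cost: int, buy_day: int) -> int:
    """Rent up to buy_day - 1, then buy if the season lasts that long."""
    if actual_days < buy_day:
        return actual_days                 # rented every day
    return (buy_day - 1) + buy_cost        # rented, then bought

# e.g. buy_cost=10, predicted_days=14, lam=0.5 -> buy on day 5
```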
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Voting of predictive models for clinical outcomes: consensus of algorithms for the early prediction of sepsis from clinical data and an analysis of the PhysioNet/Computing in Cardiology Challenge 2019 [2.0559497209595823]
We consider the problem of constructing an ensemble algorithm from 70 individual algorithms for the early prediction of sepsis from clinical data.
We find that this ensemble algorithm outperforms the separate algorithms, especially on a hidden test set.
arXiv Detail & Related papers (2020-12-20T20:12:49Z)
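As a toy illustration of the consensus idea (the challenge ensemble combined 70 models; its exact weighting scheme is not reproduced here), a weighted vote over binary sepsis alerts might look like:

```python
import numpy as np

def weighted_vote(predictions, weights=None, threshold=0.5):
    """Combine binary sepsis alerts from many models by (weighted) voting.
    predictions: array of shape (n_models, n_patients) with 0/1 alerts.
    weights: optional per-model weights, e.g. validation-set scores."""
    predictions = np.asarray(predictions, dtype=float)
    if weights is None:
        weights = np.ones(predictions.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    support = weights @ predictions   # weighted fraction voting "alert"
    return (support >= threshold).astype(int)

# e.g. three models, four patients:
# weighted_vote([[1,0,1,0],[1,1,0,0],[0,0,1,0]]) -> [1, 0, 1, 0]
```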
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating ``synthetic transects'' of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
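A minimal sketch of the underlying idea: estimate each algorithm's runtime distribution with a Kaplan-Meier curve that respects timeout-censored runs, score algorithms by an expected PAR10-style risk, and select the minimizer. Run2Survive itself fits richer, feature-conditional models; the `penalty` factor and the marginal (feature-free) selection here are simplifying assumptions.

```python
import numpy as np

def par10_risk(times, events, timeout, penalty=10.0):
    """Expected PAR10-style cost from right-censored runs.
    times: observed runtimes; events: 1 if the run finished, 0 if it
    hit the timeout. Mass the Kaplan-Meier estimator leaves beyond
    the timeout is charged at penalty * timeout."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    surv, risk, n = 1.0, 0.0, len(times)
    for rank, i in enumerate(order):
        if events[i]:
            jump = surv / (n - rank)   # KM probability mass at this event
            risk += jump * times[i]
            surv -= jump
    return risk + surv * penalty * timeout

def select_algorithm(runs, timeout):
    """runs: {name: (times, events)} from training data; pick the
    algorithm with the lowest estimated risk."""
    return min(runs, key=lambda name: par10_risk(*runs[name], timeout))

# e.g. runs = {"a": ([3, 8, 60], [1, 1, 0]), "b": ([5, 6, 7], [1, 1, 1])}
# select_algorithm(runs, timeout=60) -> "b"
```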
This list is automatically generated from the titles and abstracts of the papers in this site.