Learning to Defer with Limited Expert Predictions
- URL: http://arxiv.org/abs/2304.07306v1
- Date: Fri, 14 Apr 2023 09:22:34 GMT
- Title: Learning to Defer with Limited Expert Predictions
- Authors: Patrick Hemmer, Lukas Thede, Michael Vössing, Johannes Jakubik,
  Niklas Kühl
- Abstract summary: We propose a three-step approach to reduce the number of expert predictions required to train learning to defer algorithms.
Our experiments show that the approach allows the training of various learning to defer algorithms with a minimal number of human expert predictions.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent research suggests that combining AI models with a human expert can
exceed the performance of either alone. The combination of their capabilities
is often realized by learning to defer algorithms that enable the AI to learn
to decide whether to make a prediction for a particular instance or defer it to
the human expert. However, to accurately learn which instances should be
deferred to the human expert, a large number of expert predictions that
accurately reflect the expert's capabilities are required -- in addition to the
ground truth labels needed to train the AI. This requirement, shared by many
learning to defer algorithms, hinders their adoption in scenarios where the
responsible expert regularly changes or where acquiring a sufficient number of
expert predictions is costly. In this paper, we propose a three-step approach
to reduce the number of expert predictions required to train learning to defer
algorithms. It encompasses (1) the training of an embedding model with ground
truth labels to generate feature representations that serve as a basis for (2)
the training of an expertise predictor model to approximate the expert's
capabilities. (3) The expertise predictor generates artificial expert
predictions for instances not yet labeled by the expert, which are required by
the learning to defer algorithms. We evaluate our approach on two public
datasets: one with "synthetically" generated human experts and one from the
medical domain containing real-world radiologists' predictions. Our experiments
show that the approach allows the training of various learning to defer
algorithms with a minimal number of human expert predictions. Furthermore, we
demonstrate that even a small number of expert predictions per class is
sufficient for these algorithms to exceed the performance the AI and the human
expert can achieve individually.
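To make the three steps concrete, the following is a minimal sketch on toy data,
written in PyTorch. The architectures, the framing of the expertise predictor as
a binary "would the expert be correct on this instance?" classifier, and the
rule used to generate the artificial labels are illustrative assumptions based
on the abstract, not the authors' implementation.

    # Minimal sketch of the three-step approach on toy data; the model
    # choices and the label-generation rule are assumptions, not the
    # authors' code.
    import torch
    import torch.nn as nn

    def train(model, x, y, epochs=200, lr=1e-2):
        """Plain cross-entropy training loop."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
        return model

    torch.manual_seed(0)
    n, d, k = 500, 20, 5                      # instances, features, classes
    x_all = torch.randn(n, d)                 # toy features
    y_all = torch.randint(0, k, (n,))         # ground-truth labels
    labeled = torch.arange(40)                # the few expert-labeled instances
    # Stand-in expert: correct roughly 70% of the time on the labeled subset.
    expert = torch.where(torch.rand(40) < 0.7,
                         y_all[labeled], torch.randint(0, k, (40,)))

    # Step 1: embedding model trained with ground-truth labels.
    encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                            nn.Linear(64, 32), nn.ReLU())
    train(nn.Sequential(encoder, nn.Linear(32, k)), x_all, y_all)
    with torch.no_grad():
        z_all = encoder(x_all)                # feature representations

    # Step 2: expertise predictor approximating the expert's capabilities,
    # framed here as "would the expert be correct on this instance?".
    correct = (expert == y_all[labeled]).long()
    expertise = train(nn.Linear(32, 2), z_all[labeled], correct)

    # Step 3: artificial expert predictions for instances the expert has
    # not labeled: ground truth where the expert is predicted correct, a
    # random other class where not.
    with torch.no_grad():
        is_correct = expertise(z_all).argmax(1).bool()
    artificial = torch.where(is_correct, y_all,
                             (y_all + torch.randint(1, k, (n,))) % k)
    artificial[labeled] = expert              # keep the real expert labels

Framing step (2) as correctness prediction keeps the artificial labels
consistent with the ground truth wherever the expert is predicted to succeed;
predicting the expert's label directly over all classes would be an equally
compatible instantiation.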
Related papers
- Defining Expertise: Applications to Treatment Effect Estimation [58.7977683502207]
We argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation.
We define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset.
arXiv Detail & Related papers (2024-03-01T17:30:49Z) - Auditing for Human Expertise [12.967730957018688]
We develop a statistical framework under which we can pose this question as a natural hypothesis test.
We propose a simple procedure which tests whether expert predictions are statistically independent of the outcomes of interest.
A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data; a simplified version of such a test is sketched after this entry.
arXiv Detail & Related papers (2023-06-02T16:15:24Z) - BO-Muse: A human expert and AI teaming framework for accelerated
experimental design [58.61002520273518]
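As a rough illustration of the kind of test described in the entry above, the
sketch below runs a within-group permutation test: instances are bucketed by
feature similarity, the expert's predictions are shuffled within each bucket,
and the observed agreement with the outcomes is compared against this null
distribution. The grouping via a random one-dimensional projection and the
agreement statistic are simplifying assumptions; the paper's actual procedure
may differ.

    # Simplified permutation-style audit: does the expert's agreement with
    # the outcomes beat chance once we (crudely) condition on the features?
    # The grouping and the test statistic are illustrative assumptions.
    import numpy as np

    def expertise_pvalue(features, expert_preds, outcomes,
                         n_groups=10, n_perm=2000, seed=0):
        rng = np.random.default_rng(seed)
        # Crude conditioning: bucket instances by the quantiles of a random
        # one-dimensional projection of the feature vectors.
        proj = features @ rng.standard_normal(features.shape[1])
        edges = np.quantile(proj, np.linspace(0, 1, n_groups + 1)[1:-1])
        groups = np.digitize(proj, edges)
        observed = np.mean(expert_preds == outcomes)
        null = np.empty(n_perm)
        for i in range(n_perm):
            shuffled = expert_preds.copy()
            for g in np.unique(groups):       # permute within each group only
                idx = np.flatnonzero(groups == g)
                shuffled[idx] = rng.permutation(shuffled[idx])
            null[i] = np.mean(shuffled == outcomes)
        # One-sided p-value: small values reject independence, i.e. the
        # expert's predictions carry signal beyond the grouped features.
        return (1 + np.sum(null >= observed)) / (1 + n_perm)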
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z) - Sample Efficient Learning of Predictors that Complement Humans [5.830619388189559]
We provide the first theoretical analysis of the benefit of learning complementary predictors in expert deferral.
We design active learning schemes that require a minimal amount of human expert predictions; a generic variant of such a scheme is sketched after this entry.
arXiv Detail & Related papers (2022-07-19T23:19:25Z) - Forming Effective Human-AI Teams: Building Machine Learning Models that
Complement the Capabilities of Multiple Experts [0.0]
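The sketch below shows a generic uncertainty-sampling loop of the kind such
active learning schemes build on: the expert is queried on the instances where
a simple model of the expert's correctness is least certain. The query strategy,
the logistic model, and all names are illustrative assumptions; the paper's own
schemes and sample-complexity guarantees are more involved.

    # Generic uncertainty-sampling loop for collecting expert predictions;
    # the query strategy is an illustrative assumption, not the paper's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def query_expert_actively(x, y_true, expert_oracle,
                              budget=30, batch=5, seed=0):
        """Return {instance index: expert prediction} for queried instances."""
        rng = np.random.default_rng(seed)
        answers = {int(i): expert_oracle(int(i))
                   for i in rng.choice(len(x), batch, replace=False)}
        while len(answers) < budget:
            idx = np.fromiter(answers, dtype=int)
            correct = np.array([answers[i] == y_true[i] for i in idx], dtype=int)
            pool = np.setdiff1d(np.arange(len(x)), idx)
            if correct.min() == correct.max():  # need both classes to fit
                pick = rng.choice(pool, batch, replace=False)
            else:
                # Model of expert correctness; query where it is least sure.
                proba = (LogisticRegression(max_iter=1000)
                         .fit(x[idx], correct).predict_proba(x[pool])[:, 1])
                pick = pool[np.argsort(np.abs(proba - 0.5))[:batch]]
            answers.update({int(i): expert_oracle(int(i)) for i in pick})
        return answers

The collected predictions could then serve as the seed set of expert labels for
an approach like the three-step pipeline sketched above.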
- Forming Effective Human-AI Teams: Building Machine Learning Models that Complement the Capabilities of Multiple Experts [0.0]
We propose an approach that trains a classification model to complement the capabilities of multiple human experts.
We evaluate our proposed approach in experiments on public datasets with "synthetic" experts and a real-world medical dataset annotated by multiple radiologists.
arXiv Detail & Related papers (2022-06-16T06:42:10Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions that the agent could possibly make, which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the expert's knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)