Who Should Predict? Exact Algorithms For Learning to Defer to Humans
- URL: http://arxiv.org/abs/2301.06197v2
- Date: Tue, 11 Apr 2023 07:40:40 GMT
- Title: Who Should Predict? Exact Algorithms For Learning to Defer to Humans
- Authors: Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro
Das, David Sontag
- Abstract summary: We show that prior approaches can fail to find a human-AI system with low misclassification error.
We give a mixed-integer-linear-programming (MILP) formulation that can optimally solve the problem in the linear setting.
We provide a novel surrogate loss function that is realizable-consistent and performs well empirically.
- Score: 40.22768241509553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated AI classifiers should be able to defer the prediction to a human
decision maker to ensure more accurate predictions. In this work, we jointly
train a classifier with a rejector, which decides on each data point whether
the classifier or the human should predict. We show that prior approaches can
fail to find a human-AI system with low misclassification error even when there
exists a linear classifier and rejector that have zero error (the realizable
setting). We prove that obtaining a linear pair with low error is NP-hard even
when the problem is realizable. To complement this negative result, we give a
mixed-integer-linear-programming (MILP) formulation that can optimally solve
the problem in the linear setting. However, the MILP only scales to
moderately sized problems. Therefore, we provide a novel surrogate loss
function that is realizable-consistent and performs well empirically. We test
our approaches on a comprehensive set of datasets and compare them to a wide
range of baselines.
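Concretely, the system described in the abstract pairs a classifier with a rejector that routes each input either to the model or to the human. The sketch below is a minimal illustration of that deferral rule and of the system misclassification error (the objective shown to be NP-hard to minimize over linear pairs); the function names, the binary {-1, +1} label setting, and the toy data are assumptions for exposition, not the paper's MILP or surrogate-loss method.

```python
import numpy as np

def system_predict(X, human_preds, clf_w, rej_w):
    """Deferral rule: on each point the rejector decides whether the
    (linear) classifier or the human predicts. Labels are in {-1, +1}."""
    clf_preds = np.sign(X @ clf_w)   # linear classifier's prediction
    defer = (X @ rej_w) > 0          # linear rejector: defer to the human?
    return np.where(defer, human_preds, clf_preds)

def system_error(X, y, human_preds, clf_w, rej_w):
    """Misclassification error of the combined human-AI system; this is the
    quantity the paper proves is NP-hard to minimize over linear
    (classifier, rejector) pairs, even in the realizable setting."""
    return np.mean(system_predict(X, human_preds, clf_w, rej_w) != y)

# Toy usage: a perfect classifier, a human who is right 90% of the time,
# and a rejector that defers on roughly half the inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.sign(X[:, 0])
human = np.where(rng.random(1000) < 0.9, y, -y)
w_clf = np.array([1.0, 0, 0, 0, 0])   # predicts sign(x_0) = y exactly
w_rej = np.array([0, 1.0, 0, 0, 0])   # defers whenever x_1 > 0
print(system_error(X, y, human, w_clf, w_rej))  # ~0.05: errors only where deferred
```

In the realizable setting the abstract describes, some linear pair drives this error to zero; the hardness result says that finding such a pair is NP-hard in general.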
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both of the noise regimes it considers.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Don't guess what's true: choose what's optimal. A probability transducer for machine-learning classifiers [0.0]
In medicine and drug discovery, the ultimate goal of classification is not to guess a class, but to choose the optimal course of action among a set of possible ones.
The main idea of the present work is to calculate probabilities conditional not on the features, but on the trained classifier's output.
This calculation is cheap, needs to be made only once, and provides an output-to-probability "transducer" that can be applied to all future outputs (a minimal sketch appears after this list).
arXiv Detail & Related papers (2023-02-21T10:14:13Z)
- On Learning Mixture of Linear Regressions in the Non-Realizable Setting [44.307245411703704]
We show that mixture of linear regressions (MLR) can be used for prediction, where instead of predicting a single label the model predicts a list of values.
In this paper we show that a version of the popular alternating minimization (AM) algorithm finds the best-fit lines in a dataset even when a realizable model is not assumed (see the sketch after this list).
arXiv Detail & Related papers (2022-05-26T05:34:57Z)
- Minimax rate of consistency for linear models with missing values [0.0]
Missing values arise in most real-world data sets due to the aggregation of multiple sources and to intrinsically missing information (sensor failures, unanswered survey questions...).
In this paper, we focus on the extensively studied linear models, but in the presence of missing values, which turns out to be quite a challenging task.
This eventually requires solving a number of learning tasks that is exponential in the number of input features, which makes prediction intractable for current real-world datasets.
arXiv Detail & Related papers (2022-02-03T08:45:34Z)
- High-dimensional separability for one- and few-shot learning [58.8599521537]
This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors.
Special external devices, called correctors, are developed; they should provide a quick, non-iterative fix without modifying the legacy AI system.
New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and of learning new classes of objects by a deep convolutional neural network.
arXiv Detail & Related papers (2021-06-28T14:58:14Z)
- Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers [59.06169363181417]
Predict then Interpolate (PI) is an algorithm for learning correlations that are stable across environments.
We prove that by interpolating the distributions of the correct predictions and the wrong predictions, we can uncover an oracle distribution where the unstable correlation vanishes.
arXiv Detail & Related papers (2021-05-26T15:37:48Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Classification Under Human Assistance [29.220005688025378]
We show that supervised learning models trained to operate under different automation levels can outperform those trained for full automation as well as humans operating alone.
Experiments on synthetic and real-world data from several applications in medical diagnosis illustrate our theoretical findings.
arXiv Detail & Related papers (2020-06-21T16:52:37Z)
- Algebraic Ground Truth Inference: Non-Parametric Estimation of Sample Errors by AI Algorithms [0.0]
Non-parametric estimators of performance are an attractive solution in autonomous settings.
We show that, in the experiments where ground truth is available, the accuracy estimates are correct to better than one part in a hundred.
The practical utility of the method is illustrated on a real-world dataset from an online advertising campaign and a sample of common classification benchmarks.
arXiv Detail & Related papers (2020-06-15T12:04:47Z)
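Two of the summaries above are concrete enough to sketch. First, the output-to-probability "transducer": a minimal sketch, assuming discrete classifier outputs and a simple empirical frequency estimate on a held-out calibration set; the function name is illustrative, and the paper's construction is more general.

```python
from collections import Counter, defaultdict

def fit_transducer(clf_outputs, true_labels):
    """Estimate P(label | classifier output) empirically on a calibration
    set. The resulting mapping is fit once and reused on all future
    classifier outputs, as the summary describes."""
    counts = defaultdict(Counter)
    for out, label in zip(clf_outputs, true_labels):
        counts[out][label] += 1
    return {
        out: {label: n / sum(label_counts.values())
              for label, n in label_counts.items()}
        for out, label_counts in counts.items()
    }

# Fit once on held-out calibration data, then apply to new outputs.
transducer = fit_transducer(["A", "A", "B", "A", "B"], ["A", "B", "B", "A", "B"])
print(transducer["A"])  # {'A': 0.666..., 'B': 0.333...}
```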
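Second, the alternating minimization (AM) loop for mixtures of linear regressions: a minimal sketch of the classical assign-then-refit scheme, assuming squared-loss refits; it illustrates the shape of the algorithm, not the non-realizable-setting guarantees the cited paper proves.

```python
import numpy as np

def am_mixed_linear_regression(X, y, k=2, iters=50, seed=0):
    """Alternating minimization for MLR: repeatedly (1) assign each point
    to the line with the smallest residual and (2) refit each line by
    least squares on the points assigned to it."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(k, X.shape[1]))          # k candidate weight vectors
    for _ in range(iters):
        residuals = np.abs(y[:, None] - X @ W.T)  # |y_i - <w_j, x_i>| for all i, j
        assign = residuals.argmin(axis=1)         # best-fitting line per point
        for j in range(k):
            mask = assign == j
            if mask.any():                        # refit line j on its points
                W[j] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    return W
```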
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.