Classification Under Human Assistance
- URL: http://arxiv.org/abs/2006.11845v2
- Date: Mon, 15 Mar 2021 10:53:52 GMT
- Title: Classification Under Human Assistance
- Authors: Abir De, Nastaran Okati, Ali Zarezade, Manuel Gomez-Rodriguez
- Abstract summary: We show that supervised learning models trained to operate under different automation levels can outperform those trained for full automation as well as humans operating alone.
Experiments on synthetic and real-world data from several applications in medical diagnosis illustrate our theoretical findings.
- Score: 29.220005688025378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most supervised learning models are trained for full automation. However,
their predictions are sometimes worse than those of human experts on specific
instances. Motivated by this empirical observation, our goal is to
design classifiers that are optimized to operate under different automation
levels. More specifically, we focus on convex margin-based classifiers and
first show that the problem is NP-hard. Then, we further show that, for support
vector machines, the corresponding objective function can be expressed as the
difference of two functions f = g - c, where g is monotone, non-negative and
γ-weakly submodular, and c is non-negative and modular. This representation
allows a recently introduced deterministic greedy algorithm, as well as a more
efficient randomized variant, to enjoy approximation guarantees when solving
the problem. Experiments on synthetic and
real-world data from several applications in medical diagnosis illustrate our
theoretical findings and demonstrate that, under human assistance, supervised
learning models trained to operate under different automation levels can
outperform those trained for full automation as well as humans operating alone.
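The f = g - c structure is what makes greedy selection viable here. As a rough, hedged sketch (not the authors' verbatim algorithm), a distorted-greedy-style rule for maximizing g - c under a cardinality budget k could look like this, with g a callable set function and cost a per-element modular cost:

```python
def distorted_greedy(ground_set, g, cost, k, gamma=1.0):
    """Illustrative distorted-greedy sketch for maximizing f = g - c.

    Assumes g is monotone, non-negative and gamma-weakly submodular,
    and cost[e] is the modular (per-element) cost of e. The paper's
    exact algorithm and guarantees may differ in detail.
    """
    selected = set()
    for i in range(k):
        # Down-weight g's marginal gain early on; the distortion factor
        # grows toward 1 as the budget is exhausted.
        distortion = (1.0 - gamma / k) ** (k - (i + 1))
        best, best_score = None, 0.0
        for e in ground_set - selected:
            marginal = g(selected | {e}) - g(selected)
            score = distortion * marginal - cost[e]
            if score > best_score:
                best, best_score = e, score
        if best is not None:  # only add elements with positive distorted gain
            selected.add(best)
    return selected
```

In the paper's setting, the selected elements correspond to training instances assigned to one side of the human/machine split, with g and the costs derived from the SVM objective; see the paper for the exact correspondence.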
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802] (2024-11-23)
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713] (2024-10-29)
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
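A loose sketch of the normalized-gradient-difference idea (my reading; the paper's adaptive learning rate and other details are omitted, and the names are illustrative):

```python
import numpy as np

def ngdiff_direction(grad_retain, grad_forget, eps=1e-12):
    """Illustrative normalized gradient difference for unlearning.

    Normalizes the retain-loss and forget-loss gradients so neither
    objective dominates, then descends on retain while ascending on
    forget. Sketch only, not the paper's reference implementation.
    """
    g_r = grad_retain / (np.linalg.norm(grad_retain) + eps)
    g_f = grad_forget / (np.linalg.norm(grad_forget) + eps)
    return g_r - g_f  # use as: params -= lr * ngdiff_direction(...)
```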
- A Multimodal Automated Interpretability Agent [63.8551718480664] (2024-04-22)
MAIA is a system that uses neural models to automate neural model understanding tasks.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images.
We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be mis-classified.
- RF+clust for Leave-One-Problem-Out Performance Prediction [0.9281671380673306] (2023-01-23)
We study leave-one-problem-out (LOPO) performance prediction.
We analyze whether standard random forest (RF) model predictions can be improved by calibrating them with a weighted average of performance values.
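A hypothetical sketch of the calibration step (the blending weight alpha and the similarity weighting are my assumptions, not the paper's exact choices):

```python
import numpy as np

def rf_clust_predict(rf_pred, neighbor_perfs, similarities, alpha=0.5):
    """Hypothetical RF+clust-style calibration for LOPO prediction.

    Blends the random forest's prediction for the left-out problem with
    a similarity-weighted average of performance values observed on the
    most similar training problems.
    """
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()
    neighbor_avg = w @ np.asarray(neighbor_perfs, dtype=float)
    return alpha * rf_pred + (1.0 - alpha) * neighbor_avg
```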
- Who Should Predict? Exact Algorithms For Learning to Defer to Humans [40.22768241509553] (2023-01-15)
We show that prior approaches can fail to find a human-AI system with low misclassification error.
We give a mixed-integer linear programming (MILP) formulation that optimally solves the problem in the linear setting.
We provide a novel surrogate loss function that is realizable-consistent and performs well empirically.
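As a toy illustration of the MILP idea (a simplification with my own variable names, not the paper's exact formulation): binary variables decide which examples are deferred, a linear rejector is tied to them with big-M constraints, and the objective counts the errors of whoever handles each example.

```python
import pulp

def defer_milp(X, model_err, human_err, M=100.0):
    """Toy MILP sketch for learning a linear deferral rule.

    X[i]         : feature vector of training example i
    model_err[i] : 1 if the model misclassifies example i, else 0
    human_err[i] : 1 if the human misclassifies example i, else 0
    r[i] = 1 means example i is deferred to the human.
    """
    n, d = len(X), len(X[0])
    prob = pulp.LpProblem("learning_to_defer", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}", -1.0, 1.0) for j in range(d)]
    r = [pulp.LpVariable(f"r{i}", cat="Binary") for i in range(n)]

    # Total misclassifications of the combined human-AI system.
    prob += pulp.lpSum(r[i] * human_err[i] + (1 - r[i]) * model_err[i]
                       for i in range(n))

    # Big-M link: r[i] = 1 exactly when the rejector score w.x_i >= 0
    # (ties at zero are glossed over in this sketch).
    for i in range(n):
        score = pulp.lpSum(w[j] * X[i][j] for j in range(d))
        prob += score <= M * r[i]
        prob += score >= -M * (1 - r[i])

    prob.solve()
    return [int(v.value()) for v in r], [v.value() for v in w]
```

The paper's formulation handles details this sketch glosses over; the sketch only shows the general shape of the optimization.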
- Self-service Data Classification Using Interactive Visualization and Interpretable Machine Learning [9.13755431537592] (2021-07-11)
The Iterative Visual Logical Classifier (IVLC) is an interpretable machine learning algorithm.
IVLC is especially helpful when dealing with sensitive and crucial data like cancer data in the medical domain.
This chapter proposes an automated classification approach that combines a new Coordinate Order (COO) algorithm with a genetic algorithm.
- Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection [55.888835686183995] (2021-03-01)
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395] (2020-11-17)
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616] (2020-04-20)
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
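One way to picture the gradient-supervision term (a hedged sketch; the paper's exact objective may differ): align the input-gradient of the network with the direction from an example to its counterfactual partner.

```python
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    """Sketch of a counterfactual gradient-supervision loss.

    Encourages the gradient of the model's output w.r.t. the input to
    point along the vector from each example x to its counterfactual
    x_cf (the minimal change that flips the label). Illustrative only.
    """
    x = x.clone().requires_grad_(True)
    out = model(x).sum()  # scalar output so autograd returns one gradient
    (grad_x,) = torch.autograd.grad(out, x, create_graph=True)
    direction = (x_cf - x).detach()
    cos = F.cosine_similarity(grad_x.flatten(1), direction.flatten(1), dim=1)
    return (1.0 - cos).mean()  # zero when perfectly aligned
```

This would sit next to the usual classification loss as an auxiliary term, weighted by a small coefficient.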
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and accepts no responsibility for any consequences of its use.