Machine Learning with a Reject Option: A survey
- URL: http://arxiv.org/abs/2107.11277v3
- Date: Wed, 21 Feb 2024 10:10:40 GMT
- Title: Machine Learning with a Reject Option: A survey
- Authors: Kilian Hendrickx, Lorenzo Perini, Dries Van der Plas, Wannes Meert,
Jesse Davis
- Abstract summary: This survey aims to provide an overview of machine learning with rejection.
We introduce the conditions leading to two types of rejection, ambiguity and novelty rejection.
We review and categorize strategies to evaluate a model's predictive and rejective quality.
- Score: 18.43771007525432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models always make a prediction, even when it is likely to
be inaccurate. This behavior should be avoided in many decision support
applications, where mistakes can have severe consequences. Although already
studied in 1970, machine learning with rejection has recently gained renewed
interest. This subfield enables machine learning models to abstain from
making a prediction when they are likely to make a mistake.
This survey aims to provide an overview of machine learning with rejection.
We introduce the conditions leading to two types of rejection, ambiguity and
novelty rejection, which we carefully formalize. Moreover, we review and
categorize strategies to evaluate a model's predictive and rejective quality.
Additionally, we define the existing architectures for models with rejection
and describe the standard techniques for learning such models. Finally, we
provide examples of relevant application domains and show how machine learning
with rejection relates to other machine learning research areas.
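To make the two rejection types concrete, here is a minimal sketch (not from the survey) of a classifier with a reject option, assuming scikit-learn: ambiguity rejection abstains when the top predicted class probability falls below a threshold, and novelty rejection abstains when an outlier detector flags the input as far from the training distribution. The class name, thresholds, and component models are illustrative choices, not the survey's prescription.
```python
# Minimal sketch of a classifier with a reject option; thresholds and
# component models are illustrative, not prescribed by the survey.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

class RejectingClassifier:
    def __init__(self, tau_ambiguity=0.75, contamination=0.05):
        self.clf = LogisticRegression(max_iter=1000)     # base predictor
        self.novelty = IsolationForest(contamination=contamination,
                                       random_state=0)   # training-density model
        self.tau = tau_ambiguity                          # confidence threshold

    def fit(self, X, y):
        self.clf.fit(X, y)
        self.novelty.fit(X)   # learn what "in-distribution" looks like
        return self

    def predict(self, X):
        proba = self.clf.predict_proba(X)
        labels = self.clf.classes_[proba.argmax(axis=1)].astype(object)
        ambiguous = proba.max(axis=1) < self.tau   # low-confidence inputs
        novel = self.novelty.predict(X) == -1      # flagged as outliers
        labels[ambiguous] = "reject:ambiguity"
        labels[novel] = "reject:novelty"           # novelty takes precedence
        return labels
```
Usage would follow the usual scikit-learn pattern, e.g. `RejectingClassifier().fit(X_train, y_train).predict(X_test)`.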
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
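The datamodel-matching method itself is not described in this snippet; as a point of reference, a common baseline that approximate unlearning methods are measured against is exact unlearning, i.e., retraining from scratch on the retained data. A hedged sketch, with the model class chosen arbitrarily:
```python
# Exact-unlearning baseline (retrain from scratch without the forget set).
# This is NOT the paper's datamodel-matching method, only the usual gold
# standard it would be compared against; the model class is arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx):
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)  # retained indices
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])  # the forget set never touches this model
    return model
```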
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Learn to Unlearn: A Survey on Machine Unlearning [29.077334665555316]
This article presents a review of recent machine unlearning techniques, verification mechanisms, and potential attacks.
We highlight emerging challenges and prospective research directions.
We aim for this paper to provide valuable resources for integrating privacy, equity, and resilience into ML systems.
arXiv Detail & Related papers (2023-05-12T14:28:02Z)
- Explaining Reject Options of Learning Vector Quantization Classifiers [6.125017875330933]
We propose to use counterfactual explanations for explaining rejects in machine learning models.
We investigate how to efficiently compute counterfactual explanations of different reject options for an important class of models.
arXiv Detail & Related papers (2022-02-15T08:16:10Z)
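The paper derives efficient, model-specific computations for learning vector quantization classifiers; the brute-force sketch below only illustrates what a counterfactual explanation of a reject is: the closest accepted input, together with the feature changes needed to reach it. Function and argument names are hypothetical.
```python
# Brute-force sketch of a counterfactual explanation for a rejected input:
# find the nearest point the model accepts and report the feature delta.
# The paper's efficient, LVQ-specific computation is not reproduced here.
import numpy as np

def counterfactual_for_reject(x, X_accepted):
    dists = np.linalg.norm(X_accepted - x, axis=1)  # distance to each accepted point
    x_cf = X_accepted[dists.argmin()]               # closest accepted input
    return x_cf, x_cf - x                           # counterfactual and required change
```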
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss.
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
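The exact loss used in the paper is not given in this summary; one plausible form of a diversity-enforcing term over several latent perturbations penalizes pairwise similarity, as in this hypothetical sketch:
```python
# Hypothetical diversity-enforcing penalty over k latent perturbations:
# penalize pairwise cosine similarity so the explanations point in
# different latent directions. The paper's actual loss may differ.
import numpy as np

def diversity_loss(deltas):
    unit = deltas / np.linalg.norm(deltas, axis=1, keepdims=True)  # normalize rows
    gram = unit @ unit.T                    # pairwise cosine similarities
    off_diag = gram - np.eye(len(deltas))   # zero out self-similarity
    return np.sum(off_diag ** 2)            # smaller value = more diverse set
```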
- A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning [95.48744259567837]
Statistical machine learning theory often tries to give generalization guarantees for machine learning models.
Any statement about a model's performance has to take the sampling process into account.
We show how one may transform one type of guarantee into the other.
arXiv Detail & Related papers (2020-10-06T09:41:35Z)
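One standard conversion from an in-expectation guarantee to a high-probability one (likely among those the note discusses, though the note itself covers more) uses Markov's inequality, assuming the generalization gap G is non-negative:
```latex
% If the generalization gap G = R(h) - \hat{R}(h) satisfies G \ge 0 and
% \mathbb{E}[G] \le B, Markov's inequality gives, for any \delta \in (0,1):
\Pr\!\left[ G \ge \frac{B}{\delta} \right]
  \;\le\; \frac{\mathbb{E}[G]}{B/\delta}
  \;\le\; \delta .
```
That is, an in-expectation bound of B becomes a high-probability bound of B/delta, at the cost of the 1/delta blow-up.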
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
This article presents a new kind of interpretable machine learning method.
It helps to understand how a classification model partitions the feature space into predicted classes, using quantile shifts.
Real data points (or specific points of interest) are used, and the change in prediction after slightly raising or lowering specific features is observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
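The quantile-shift machinery itself is not detailed in this summary; the sketch below, with a fixed step size standing in for quantile-based shifts, only illustrates the underlying perturb-and-observe idea:
```python
# Perturb-and-observe sketch: nudge one feature of a real data point up or
# down and record whether the predicted class changes. A fixed step stands
# in for the paper's quantile shifts, which are not reproduced here.
import numpy as np

def class_change_on_shift(model, x, feature, step):
    base = model.predict(x.reshape(1, -1))[0]  # prediction at the original point
    shifted = {}
    for sign in (+1, -1):
        x_new = x.copy()
        x_new[feature] += sign * step          # slightly raise or lower one feature
        shifted[sign] = model.predict(x_new.reshape(1, -1))[0]
    return base, shifted                       # compare to locate class boundaries
```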
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- A Hierarchy of Limitations in Machine Learning [0.0]
This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society.
Modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them.
Consumers of machine learning models can know what to question when confronted with the decision of whether, where, and how to apply machine learning.
arXiv Detail & Related papers (2020-02-12T19:39:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.