Weakly Supervised Learners for Correction of AI Errors with Provable
Performance Guarantees
- URL: http://arxiv.org/abs/2402.00899v3
- Date: Tue, 13 Feb 2024 15:53:06 GMT
- Title: Weakly Supervised Learners for Correction of AI Errors with Provable
Performance Guarantees
- Authors: Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng,
Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban,
Penelope Allison
- Abstract summary: We present a new methodology for handling AI errors by introducing weakly supervised AI error correctors with a priori performance guarantees.
These AI correctors are auxiliary maps whose role is to moderate the decisions of some previously constructed underlying classifier by either approving or rejecting its decisions.
A key technical focus of the work lies in providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.
- Score: 38.36817319051697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new methodology for handling AI errors by introducing weakly
supervised AI error correctors with a priori performance guarantees. These AI
correctors are auxiliary maps whose role is to moderate the decisions of some
previously constructed underlying classifier by either approving or rejecting
its decisions. The rejection of a decision can be used as a signal to suggest
abstaining from making a decision. A key technical focus of the work lies in
providing performance guarantees for these new AI correctors through bounds on
the probabilities of incorrect decisions. These bounds are distribution
agnostic and do not rely on assumptions on the data dimension. Our empirical
example illustrates how the framework can be applied to improve the performance
of an image classifier in a challenging real-world task where training data are
scarce.
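To make the approve/reject mechanism concrete, below is a minimal sketch of a corrector wrapping an underlying classifier. It is an illustrative stand-in, not the paper's construction: the gate is an ordinary logistic model trained on a small labelled set, the class name `ErrorCorrector` and its threshold are hypothetical, and the paper's distribution-agnostic bounds are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ErrorCorrector:
    """Auxiliary map that approves or rejects the decisions of a
    previously constructed classifier. Illustrative sketch only:
    the paper's corrector and its probabilistic guarantees are
    not reproduced here."""

    def __init__(self, base_model):
        self.base_model = base_model      # frozen underlying classifier
        self.gate = LogisticRegression()  # hypothetical approve/reject map

    def fit(self, X_cal, y_cal):
        # Weak supervision: the gate only sees whether the base
        # classifier was right on a small labelled calibration set.
        correct = (self.base_model.predict(X_cal) == y_cal).astype(int)
        self.gate.fit(X_cal, correct)
        return self

    def predict(self, X, threshold=0.5):
        # Approve the base decision when the gate deems it likely
        # correct; otherwise return None as an abstention signal.
        base = self.base_model.predict(X)
        p_correct = self.gate.predict_proba(X)[:, 1]
        return [b if p >= threshold else None
                for b, p in zip(base, p_correct)]
```

Raising the threshold trades coverage for a lower probability of approving an incorrect decision, which is the quantity the paper's bounds control.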
Related papers
- Rationale-Aware Answer Verification by Pairwise Self-Evaluation [11.763229353978321]
We show that training reliable verifiers requires ensuring the validity of rationales in addition to the correctness of the final answers.
arXiv Detail & Related papers (2024-10-07T08:53:00Z)
- Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch [19.03141646688652]
We use the theory of mind, i.e., the human user's beliefs about the AI agent, as a basis to develop a formal explanatory framework.
We propose a new interactive algorithm that uses the specified reward to infer potential user expectations.
arXiv Detail & Related papers (2024-04-12T19:43:37Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Calibrating AI Models for Wireless Communications via Conformal Prediction [55.47458839587949]
This paper investigates conformal prediction as a general framework for obtaining AI models whose decisions carry formal calibration guarantees.
Conformal prediction is applied for the first time to the design of AI for communication systems; a generic split-conformal sketch appears after this list.
arXiv Detail & Related papers (2022-12-15T12:52:23Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the expert's knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
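For comparison, here is a generic split-conformal classification sketch of the kind the calibration paper above builds on. The function name and the "one minus the true-class probability" nonconformity score are common textbook choices, not that paper's communication-specific design.

```python
import numpy as np

def split_conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Generic split conformal prediction for classification.
    probs_cal, probs_test: model class-probability matrices on the
    calibration and test sets; y_cal: integer calibration labels.
    Returns prediction sets with roughly (1 - alpha) marginal coverage."""
    n = len(y_cal)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    # A class joins the prediction set when its score is at most q.
    return [np.where(1.0 - p <= q)[0] for p in probs_test]
```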
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.