Uncertainty Quantification for Rule-Based Models
- URL: http://arxiv.org/abs/2211.01915v1
- Date: Thu, 3 Nov 2022 15:50:09 GMT
- Title: Uncertainty Quantification for Rule-Based Models
- Authors: Yusik Kim
- Abstract summary: Rule-based classification models directly predict boolean values, rather than modeling a probability and translating it into a prediction as done in statistical models.
We propose an uncertainty quantification framework in the form of a meta-model that takes any binary classifier with binary output as a black box and estimates the prediction accuracy of that base model at a given input along with a level of confidence on that estimation.
- Score: 0.03807314298073299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rule-based classification models described in the language of logic directly predict boolean values, rather than modeling a probability and translating it into a prediction as done in statistical models. The vast majority of existing uncertainty quantification approaches rely on models providing continuous output, which is not available from rule-based models. In this work, we propose an uncertainty quantification framework in the form of a meta-model that takes any binary classifier with binary output as a black box and estimates the prediction accuracy of that base model at a given input, along with a level of confidence on that estimate. The confidence is based on how well that input region is explored and is designed to work in any out-of-distribution (OOD) scenario. We demonstrate the usefulness of this uncertainty model by building an abstaining classifier powered by it and observing its performance in various scenarios.
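The abstract does not spell out how the meta-model is constructed, so the following is only a minimal sketch of the idea under stated assumptions: a held-out calibration set, a k-nearest-neighbour estimate of the black-box model's local accuracy, a Wilson lower bound whose width reflects how many calibration points actually lie near the query (a crude proxy for how well that input region is explored), and an abstention rule on top. The names `AccuracyMetaModel` and `abstaining_predict`, and every modelling choice below, are illustrative assumptions, not the authors' method.

```python
import numpy as np

class AccuracyMetaModel:
    """Sketch of a meta-model that scores a black-box binary classifier locally."""

    def __init__(self, base_predict, k=25, radius=1.0):
        self.base_predict = base_predict  # black box: array (n, d) -> labels in {0, 1}
        self.k = k                        # neighbours used for the local accuracy estimate
        self.radius = radius              # neighbourhood defining a "well explored" region

    def fit(self, X_cal, y_cal):
        # Evaluate the black-box model once on a held-out calibration set.
        self.X_cal = np.asarray(X_cal, dtype=float)
        self.correct = (self.base_predict(self.X_cal) == np.asarray(y_cal)).astype(float)
        return self

    def estimate(self, x, z=1.96):
        # Local accuracy: fraction of correct calibration predictions among the k
        # nearest neighbours of x. Confidence: Wilson lower bound computed with the
        # number of neighbours that are genuinely close, so it collapses toward 0
        # in poorly explored (e.g. out-of-distribution) regions.
        d = np.linalg.norm(self.X_cal - np.asarray(x, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]
        p_hat = self.correct[idx].mean()
        n = max(int(np.sum(d[idx] <= self.radius)), 1)
        centre = (p_hat + z * z / (2 * n)) / (1 + z * z / n)
        half = (z / (1 + z * z / n)) * np.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
        return p_hat, max(centre - half, 0.0)  # (accuracy estimate, lower confidence bound)


def abstaining_predict(meta, x, threshold=0.7):
    # Abstaining classifier: answer only when the lower bound on the base model's
    # local accuracy clears the threshold; otherwise return None (abstain).
    _, lower = meta.estimate(x)
    if lower >= threshold:
        return int(meta.base_predict(np.asarray(x, dtype=float)[None, :])[0])
    return None
```

Far from the calibration data the neighbourhood count drops, the lower bound shrinks toward zero, and the wrapper abstains, which is one simple way to realize the abstract's "confidence based on how well that input region is explored."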
Related papers
- A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for Large Language Models (LLMs).
We derive novel metrics with high-probability guarantees concerning the output distribution of a model.
Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
arXiv Detail & Related papers (2024-10-04T15:44:23Z)
- Bounding-Box Inference for Error-Aware Model-Based Reinforcement Learning [4.185571779339683]
In model-based reinforcement learning, simulated experiences are often treated as equivalent to experience from the real environment.
We show that the best results require distribution-insensitive inference to estimate the uncertainty over model-based updates.
We find that bounding-box inference can reliably support effective selective planning.
arXiv Detail & Related papers (2024-06-23T04:23:15Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Uncertainty Quantification for Local Model Explanations Without Model Access [0.44241702149260353]
We present a model-agnostic algorithm for generating post-hoc explanations for a machine learning model.
Our algorithm uses a bootstrapping approach to quantify the uncertainty that inevitably arises when generating explanations from a finite sample of model queries.
arXiv Detail & Related papers (2023-01-13T21:18:00Z)
- Rigorous Assessment of Model Inference Accuracy using Language Cardinality [5.584832154027001]
We develop a systematic approach that minimizes bias and uncertainty in model accuracy assessment by replacing statistical estimation with deterministic accuracy measures.
We experimentally demonstrate the consistency and applicability of our approach by assessing the accuracy of models inferred by state-of-the-art inference tools.
arXiv Detail & Related papers (2022-11-29T21:03:26Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Uncertainty estimation under model misspecification in neural network regression [3.2622301272834524]
We study the effect of the model choice on uncertainty estimation.
We highlight that under model misspecification, aleatoric uncertainty is not properly captured.
arXiv Detail & Related papers (2021-11-23T10:18:41Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Estimating predictive uncertainty for rumour verification models [24.470032028639107]
We show that uncertainty estimates can be used to filter out model predictions likely to be erroneous.
We propose two methods for uncertainty-based instance rejection, supervised and unsupervised.
arXiv Detail & Related papers (2020-05-14T17:42:25Z)