Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing,
and Improving Uncertainty Quantification
- URL: http://arxiv.org/abs/2109.10254v1
- Date: Tue, 21 Sep 2021 15:32:06 GMT
- Title: Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing,
and Improving Uncertainty Quantification
- Authors: Youngseog Chung, Ian Char, Han Guo, Jeff Schneider, Willie Neiswanger
- Abstract summary: Uncertainty Toolbox is a Python library that helps to assess, visualize, and improve uncertainty quantification.
It additionally provides pedagogical resources, such as a glossary of key terms and an organized collection of key paper references.
- Score: 15.35099402481255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With increasing deployment of machine learning systems in various real-world
tasks, there is a greater need for accurate quantification of predictive
uncertainty. While the common goal in uncertainty quantification (UQ) in
machine learning is to approximate the true distribution of the target data,
many works in UQ tend to be disjoint in the evaluation metrics utilized, and
disparate implementations for each metric lead to numerical results that are
not directly comparable across different works. To address this, we introduce
Uncertainty Toolbox, an open-source python library that helps to assess,
visualize, and improve UQ. Uncertainty Toolbox additionally provides
pedagogical resources, such as a glossary of key terms and an organized
collection of key paper references. We hope that this toolbox is useful for
accelerating and uniting research efforts in uncertainty in machine learning.
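As a rough illustration of the kind of assessment the toolbox targets, the sketch below computes an average-calibration measure for Gaussian predictive distributions from scratch with NumPy and SciPy. The toy data, the deliberately misspecified constant predictive standard deviation, and the hand-rolled metric are illustrative assumptions for this page; they are not the toolbox's API.
```python
import numpy as np
from scipy import stats

# Toy regression targets with heteroscedastic noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.1 + 0.05 * x, size=x.size)

# Stand-in for a model's Gaussian predictions: a mean and std per point.
pred_mean = np.sin(x)
pred_std = np.full_like(x, 0.3)  # deliberately misspecified, constant std

# Average calibration: for each nominal coverage level p, compare p with the
# observed fraction of targets falling inside the centered p-interval of the
# predicted Gaussian.
expected_p = np.linspace(0.01, 0.99, 99)
z = np.abs(y - pred_mean) / pred_std
z_crit = stats.norm.ppf(0.5 + expected_p / 2)  # interval half-width in std units
observed_p = np.array([(z <= zc).mean() for zc in z_crit])

# Mean absolute calibration error: 0 means perfectly calibrated on average.
mace = np.abs(observed_p - expected_p).mean()
print(f"mean absolute calibration error: {mace:.3f}")
```
Plotting observed_p against expected_p gives the familiar calibration curve that visual UQ diagnostics of this kind are built on.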
Related papers
- Truthful Meta-Explanations for Local Interpretability of Machine Learning Models [10.342433824178825]
We present a local meta-explanation technique that builds on the truthfulness metric, a faithfulness-based measure.
We demonstrate the effectiveness of both the technique and the metric through concrete definitions of all the concepts and through experiments.
arXiv Detail & Related papers (2022-12-07T08:32:04Z)
- Unexpectedly Useful: Convergence Bounds And Real-World Distributed Learning [20.508003076947848]
Convergence bounds can predict and improve the performance of real-world distributed learning tasks.
Some quantities appearing in the bounds turn out to be very useful for identifying the clients that are most likely to contribute to the learning process.
This suggests that further research is warranted on the ways -- often counter-intuitive -- in which convergence bounds can be exploited to improve the performance of real-world distributed learning tasks.
arXiv Detail & Related papers (2022-12-05T10:55:25Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification [1.90365714903665]
This hands-on introduction is aimed at a reader interested in the practical implementation of distribution-free UQ.
We will include many explanatory illustrations, examples, and code samples in Python, with PyTorch syntax.
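Since this entry is about the practical implementation of distribution-free UQ, a minimal split-conformal regression sketch may help; the stand-in model, toy data, and miscoverage level below are assumptions and are not taken from the tutorial itself.
```python
import numpy as np

# Split conformal prediction for regression: a minimal NumPy sketch.
rng = np.random.default_rng(1)

def fit_and_predict(x_train, y_train, x_new):
    # Stand-in "model": a simple least-squares line fit.
    slope, intercept = np.polyfit(x_train, y_train, deg=1)
    return slope * x_new + intercept

# Proper training / calibration split.
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(scale=1.0, size=x.size)
x_fit, y_fit = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# 1. Nonconformity scores on the held-out calibration set.
scores = np.abs(y_cal - fit_and_predict(x_fit, y_fit, x_cal))

# 2. Conformal quantile with the finite-sample correction.
alpha = 0.1
n = scores.size
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# 3. Prediction interval for a new input: marginal coverage >= 1 - alpha,
#    assuming only exchangeability of the data, no distributional form.
x_test = np.array([5.0])
pred = fit_and_predict(x_fit, y_fit, x_test)
lower, upper = pred - q, pred + q
print(f"90% conformal interval at x=5: [{lower[0]:.2f}, {upper[0]:.2f}]")
```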
arXiv Detail & Related papers (2021-07-15T17:59:50Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI [49.64037266892634]
We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
arXiv Detail & Related papers (2021-06-02T18:29:04Z)
- Scaling up Memory-Efficient Formal Verification Tools for Tree Ensembles [2.588973722689844]
We formalise and extend the VoTE algorithm, presented earlier as a tool description.
We show how the separation of property checking from the core verification engine enables verification of versatile requirements.
We demonstrate the application of the tool in two case studies, namely digit recognition and aircraft collision avoidance.
arXiv Detail & Related papers (2021-05-06T11:50:22Z)
- Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
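For the last entry above, the transductive prototype update it describes can be sketched in a few lines. The softmax-over-distances confidence below is a simple stand-in for the paper's meta-learned confidence and is an assumption, not the authors' method.
```python
import numpy as np

def refine_prototypes(prototypes, queries, temperature=1.0, steps=3):
    """Transductive prototype refinement with confidence-weighted queries.

    prototypes: (num_classes, dim) initial class means from the support set.
    queries:    (num_queries, dim) unlabeled query embeddings.
    The softmax-over-negative-distances confidence is a stand-in for the
    meta-learned confidence described in the paper.
    """
    protos = prototypes.copy()
    for _ in range(steps):
        # Squared Euclidean distance from each query to each prototype.
        dists = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        # Confidence = soft assignment of each query to each class.
        logits = -dists / temperature
        conf = np.exp(logits - logits.max(axis=1, keepdims=True))
        conf /= conf.sum(axis=1, keepdims=True)
        # Move each prototype toward the confidence-weighted query mean,
        # mixed with the original support-set prototype.
        weighted_mean = (conf.T @ queries) / (conf.sum(axis=0)[:, None] + 1e-8)
        protos = 0.5 * prototypes + 0.5 * weighted_mean
    return protos

# Example: 3 classes, 5-dimensional embeddings, 20 unlabeled queries.
protos0 = np.random.default_rng(2).normal(size=(3, 5))
queries = np.random.default_rng(3).normal(size=(20, 5))
refined = refine_prototypes(protos0, queries)
```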
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.