Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and
Communicating the Uncertainty of AI
- URL: http://arxiv.org/abs/2106.01410v2
- Date: Fri, 4 Jun 2021 01:08:35 GMT
- Title: Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and
Communicating the Uncertainty of AI
- Authors: Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri
Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang
- Abstract summary: We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
- Score: 49.64037266892634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we describe an open source Python toolkit named Uncertainty
Quantification 360 (UQ360) for the uncertainty quantification of AI models. The
goal of this toolkit is twofold: first, to provide a broad range of
capabilities to streamline as well as foster the common practices of
quantifying, evaluating, improving, and communicating uncertainty in the AI
application development lifecycle; second, to encourage further exploration of
UQ's connections to other pillars of trustworthy AI such as fairness and
transparency through the dissemination of latest research and education
materials. Beyond the Python package (\url{https://github.com/IBM/UQ360}), we
have developed an interactive experience (\url{http://uq360.mybluemix.net}) and
guidance materials as educational tools to aid researchers and developers in
producing and communicating high-quality uncertainties in an effective manner.
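To make the quantify-and-evaluate loop above concrete, here is a minimal sketch, in plain scikit-learn and NumPy rather than UQ360's own API, of producing a nominal 90% prediction interval and scoring it with prediction interval coverage probability (PICP) and mean prediction interval width (MPIW), two standard interval metrics of the kind UQ360's metrics module provides. The dataset and model are illustrative stand-ins, not part of the toolkit.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy regression task standing in for a real application.
X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two quantile-loss models bracket a nominal 90% prediction interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X_tr, y_tr)
y_lo, y_hi = lo.predict(X_te), hi.predict(X_te)

# PICP: fraction of true targets covered by the interval (ideally ~0.90 here).
picp = np.mean((y_te >= y_lo) & (y_te <= y_hi))
# MPIW: average interval width; narrower is better at comparable coverage.
mpiw = np.mean(y_hi - y_lo)
print(f"PICP = {picp:.3f}, MPIW = {mpiw:.2f}")
```

Coverage and width trade off: an arbitrarily wide interval trivially raises PICP, so the two metrics are read together.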
Related papers
- Utilizing Background Knowledge for Robust Reasoning over Traffic Situations [63.45021731775964]
We focus on a complementary research aspect of Intelligent Transportation: traffic understanding.
We scope our study to text-based methods and datasets, given the abundant commonsense knowledge available in text.
We adopt three knowledge-driven approaches for zero-shot QA over traffic situations.
arXiv Detail & Related papers (2022-12-04T09:17:24Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge in vision-language pre-training models and mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- A Survey on Uncertainty Toolkits for Deep Learning [3.113304966059062]
We present the first survey of toolkits for uncertainty estimation in deep learning (DL).
We investigate 11 toolkits with respect to modeling and evaluation capabilities.
Of the toolkits examined in detail, the first two provide a large degree of flexibility and seamless integration into their respective frameworks, while the last has the broader methodological scope.
arXiv Detail & Related papers (2022-05-02T17:23:06Z)
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond [8.938727411982399]
Quantus is a comprehensive evaluation toolkit in Python that includes a collection of evaluation metrics and tutorials for evaluating explanation methods.
The toolkit has been thoroughly tested and is available under an open-source license on PyPI.
arXiv Detail & Related papers (2022-02-14T16:45:36Z)
- Tools and Practices for Responsible AI Engineering [0.5249805590164901]
We present two new software libraries that address critical needs for responsible AI engineering.
hydra-zen dramatically simplifies the process of making complex AI applications configurable and their behaviors reproducible (a configuration sketch follows this entry).
The rAI-toolbox is designed to enable methods for evaluating and enhancing the robustness of AI models.
arXiv Detail & Related papers (2022-01-14T19:47:46Z)
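A minimal sketch of the hydra-zen pattern described above, assuming its documented builds/instantiate/to_yaml entry points; the train function is a hypothetical stand-in for a real application:

```python
from hydra_zen import builds, instantiate, to_yaml

# Hypothetical training routine standing in for a complex AI application.
def train(lr: float = 0.01, epochs: int = 10) -> str:
    return f"trained: lr={lr}, epochs={epochs}"

# builds() lifts the function signature into a typed, YAML-serializable config.
TrainConf = builds(train, populate_full_signature=True)

print(to_yaml(TrainConf))              # the config itself documents the run
print(instantiate(TrainConf, lr=0.1))  # override one field, then call train()
```

Because the config is generated from the function signature, the YAML record and the callable cannot silently drift apart, which is what makes the behavior reproducible.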
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
- Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification [15.35099402481255]
Uncertainty Toolbox is a Python library that helps assess, visualize, and improve uncertainty quantification (a usage sketch follows this entry).
It additionally provides pedagogical resources, such as a glossary of key terms and an organized collection of key paper references.
arXiv Detail & Related papers (2021-09-21T15:32:06Z)
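A minimal sketch of the assessment step, assuming the get_all_metrics entry point shown in the toolbox's README; the Gaussian toy predictions are illustrative only:

```python
import numpy as np
import uncertainty_toolbox as uct

# Synthetic regression predictions with known Gaussian predictive noise.
rng = np.random.default_rng(0)
y_true = rng.normal(size=200)
y_pred = y_true + rng.normal(scale=0.3, size=200)
y_std = np.full(200, 0.3)

# Reports accuracy, calibration, sharpness, and proper-scoring-rule metrics.
metrics = uct.metrics.get_all_metrics(y_pred, y_std, y_true)
```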
- Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337]
Deep learning-based object pose estimators are often unreliable and overconfident.
In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation.
arXiv Detail & Related papers (2020-11-16T06:51:55Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI that addresses six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)