Interpretable Uncertainty Quantification in AI for HEP
- URL: http://arxiv.org/abs/2208.03284v2
- Date: Mon, 8 Aug 2022 20:46:34 GMT
- Title: Interpretable Uncertainty Quantification in AI for HEP
- Authors: Thomas Y. Chen, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian
Nord, Nesar Ramachandra
- Abstract summary: Estimating uncertainty is at the core of performing scientific measurements in HEP.
The goal of uncertainty quantification (UQ) is inextricably linked to the question, "how do we physically and statistically interpret these uncertainties?"
For artificial intelligence (AI) applications in HEP, there are several areas where interpretable methods for UQ are essential.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating uncertainty is at the core of performing scientific measurements
in HEP: a measurement is not useful without an estimate of its uncertainty. The
goal of uncertainty quantification (UQ) is inextricably linked to the question,
"how do we physically and statistically interpret these uncertainties?" The
answer to this question depends not only on the computational task we aim to
undertake, but also on the methods we use for that task. For artificial
intelligence (AI) applications in HEP, there are several areas where
interpretable methods for UQ are essential, including inference, simulation,
and control/decision-making. There exist some methods for each of these areas,
but they have not yet been demonstrated to be as trustworthy as more
traditional approaches currently employed in physics (e.g., non-AI frequentist
and Bayesian methods).
Shedding light on the questions above requires additional understanding of
the interplay of AI systems and uncertainty quantification. We briefly discuss
the existing methods in each area and relate them to tasks across HEP. We then
discuss recommendations for avenues to pursue to develop the necessary
techniques for reliable widespread usage of AI with UQ over the next decade.
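For readers who want a concrete picture of the kind of AI-based UQ the abstract alludes to, here is a minimal, generic sketch (not taken from the paper) of an ensemble-based uncertainty estimate for a regression task, using the law of total variance to separate aleatoric and epistemic contributions; the ensemble outputs below are placeholder values.

```python
import numpy as np

# Generic illustration, not the authors' method: assume an ensemble of K
# probabilistic regressors, each returning a predictive mean and variance for
# the same input. The law of total variance splits the combined uncertainty
# into an aleatoric (data-noise) and an epistemic (model-disagreement) part.

def ensemble_uncertainty(means, variances):
    """means, variances: length-K arrays from K ensemble members."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    prediction = means.mean()        # combined point prediction
    aleatoric = variances.mean()     # average intrinsic-noise estimate
    epistemic = means.var()          # spread across ensemble members
    return prediction, aleatoric, epistemic, aleatoric + epistemic

# Toy usage with hypothetical ensemble outputs for one event:
mu, al, ep, tot = ensemble_uncertainty(means=[1.02, 0.97, 1.05],
                                        variances=[0.040, 0.050, 0.045])
print(f"prediction={mu:.3f}  aleatoric={al:.3f}  epistemic={ep:.3f}  total={tot:.3f}")
```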
Related papers
- Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness [106.52630978891054]
We present a taxonomy of uncertainty specific to vision-language AI systems.
We also introduce a new metric, confidence-weighted accuracy, which is well correlated with both accuracy and calibration error.
arXiv Detail & Related papers (2024-07-02T04:23:54Z)
- Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond [63.969531254692725]
Uncertainty estimation plays a pivotal role in ensuring the reliability of safety-critical human-AI interaction systems.
We propose the Word-Sequence Entropy (WSE), which calibrates the uncertainty proportion at both the word and sequence levels according to semantic relevance.
We show that WSE exhibits superior performance on accurate uncertainty measurement under two standard criteria for correctness evaluation.
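As background (not the paper's WSE definition), the sketch below computes plain token-level and length-normalized sequence entropies from a model's per-step probability distributions, the kind of baseline that WSE refines with semantic-relevance weighting; the token distributions are made up.

```python
import numpy as np

# Generic baseline only: token-level and sequence-level predictive entropy for
# a generated answer, given the model's per-step probability distributions.
# WSE (the paper above) additionally reweights by semantic relevance; that
# weighting is not reproduced here.

def token_entropies(step_probs):
    """step_probs: (T, V) array, each row a probability distribution over the vocab."""
    p = np.clip(np.asarray(step_probs, dtype=float), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)           # entropy per generated token

def sequence_entropy(step_probs):
    return token_entropies(step_probs).mean()     # length-normalized sequence entropy

# Toy usage with a hypothetical 3-token generation over a 4-word vocabulary:
probs = [[0.70, 0.10, 0.10, 0.10],
         [0.40, 0.30, 0.20, 0.10],
         [0.25, 0.25, 0.25, 0.25]]
print("per-token entropy:", np.round(token_entropies(probs), 3))
print("sequence entropy :", round(sequence_entropy(probs), 3))
```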
arXiv Detail & Related papers (2024-02-22T03:46:08Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
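As a generic illustration of uncertainty propagation over an autoregressive rollout (the paper's latent-space method is more efficient and is not reproduced here), the sketch below samples many stochastic surrogate rollouts and reports the per-step spread; the toy step function and noise scale are assumptions.

```python
import numpy as np

# Generic illustration only: propagate uncertainty through an autoregressive
# surrogate rollout by sampling many trajectories and summarizing their spread.
# The toy one-step surrogate and its noise model below are placeholders.

rng = np.random.default_rng(0)

def step(state, noise_scale=0.02):
    """Hypothetical one-step surrogate: damped dynamics plus model noise."""
    return 0.95 * state + noise_scale * rng.normal(size=state.shape)

def rollout_with_uncertainty(x0, n_steps=50, n_samples=100):
    trajs = np.empty((n_samples, n_steps))
    for s in range(n_samples):
        x = np.array(x0, dtype=float)
        for t in range(n_steps):
            x = step(x)
            trajs[s, t] = x.mean()                # scalar summary of the state
    return trajs.mean(axis=0), trajs.std(axis=0)  # per-step mean and uncertainty

mean, std = rollout_with_uncertainty(x0=np.ones(8))
print("final-step prediction:", round(mean[-1], 4), "+/-", round(std[-1], 4))
```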
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types: input, system, and output.
We then discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- A Survey on Uncertainty Quantification Methods for Deep Learning [7.102893202197349]
Uncertainty quantification (UQ) aims to estimate the confidence of DNN predictions beyond prediction accuracy.
This paper presents a systematic taxonomy of UQ methods for DNNs based on the types of uncertainty sources.
We show how our taxonomy of UQ methodologies can potentially help guide the choice of UQ method in different machine learning problems.
arXiv Detail & Related papers (2023-02-26T22:30:08Z)
- Conformal Methods for Quantifying Uncertainty in Spatiotemporal Data: A Survey [0.0]
In high-risk settings, it is important that a model produces uncertainty estimates that reflect its own confidence and help avoid failures.
In this paper, we survey recent works on uncertainty quantification (UQ) for deep learning, in particular the distribution-free Conformal Prediction method, for its mathematical properties and wide applicability.
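For reference, here is a minimal split-conformal sketch for regression (the generic method, not the spatiotemporal extensions the survey covers), assuming a pretrained point predictor exposing a `.predict(X)` method; the calibration data and the toy model are placeholders.

```python
import numpy as np

# Minimal split-conformal sketch for regression. Nonconformity scores on a
# held-out calibration set give a quantile that widens point predictions into
# intervals with (1 - alpha) marginal coverage.

def conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    scores = np.abs(y_cal - model.predict(X_cal))          # calibration residuals
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    preds = model.predict(X_test)
    return preds - q, preds + q

# Toy usage with a placeholder "model" that predicts the row mean of its inputs:
class MeanModel:
    def predict(self, X):
        return np.asarray(X).mean(axis=1)

rng = np.random.default_rng(1)
X_cal = rng.normal(size=(200, 3))
y_cal = X_cal.mean(axis=1) + 0.1 * rng.normal(size=200)
X_test = rng.normal(size=(5, 3))
lo, hi = conformal_interval(MeanModel(), X_cal, y_cal, X_test, alpha=0.1)
print(np.round(np.c_[lo, hi], 3))
```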
arXiv Detail & Related papers (2022-09-08T06:08:48Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
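For context on the method being critiqued, the sketch below shows the standard readout of prediction, aleatoric, and epistemic uncertainty from the Normal-Inverse-Gamma parameters an evidential regression network outputs (following the original Deep Evidential Regression formulation); the network and its training loss are omitted, and the parameter values are placeholders.

```python
import numpy as np

# Deep Evidential Regression has the network output Normal-Inverse-Gamma
# parameters (gamma, nu, alpha, beta) per input. The usual readout is shown
# below; the network and its evidential loss are omitted, and the numbers
# passed in are placeholders.

def evidential_readout(gamma, nu, alpha, beta):
    prediction = gamma                           # E[mu]
    aleatoric = beta / (alpha - 1.0)             # E[sigma^2], requires alpha > 1
    epistemic = beta / (nu * (alpha - 1.0))      # Var[mu]
    return prediction, aleatoric, epistemic

pred, al, ep = evidential_readout(gamma=1.3, nu=2.0, alpha=3.0, beta=0.8)
print(f"prediction={pred}  aleatoric={al:.3f}  epistemic={ep:.3f}")
```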
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges [76.20963684020145]
Uncertainty quantification (UQ) plays a pivotal role in reducing uncertainties during both optimization and decision-making processes.
Bayesian approximation and ensemble learning techniques are the two most widely used UQ methods in the literature.
This study reviews recent advances in UQ methods used in deep learning and investigates the application of these methods in reinforcement learning.
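To make the two families named above concrete, here is a toy Monte Carlo dropout sketch (one common form of "Bayesian approximation"): dropout stays active at inference, and the spread of repeated stochastic forward passes serves as an epistemic-uncertainty estimate. The two-layer network and its untrained weights are placeholders, not from the survey.

```python
import numpy as np

# Toy MC-dropout illustration: sample a dropout mask on every forward pass and
# treat the spread of repeated predictions as an uncertainty estimate. The
# weights below are random and untrained; this is a sketch, not a recipe.

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x, p_drop=0.2):
    h = np.maximum(x @ W1 + b1, 0.0)                       # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop                     # sample a dropout mask
    h = h * mask / (1.0 - p_drop)                           # inverted dropout scaling
    return (h @ W2 + b2).squeeze(-1)

def mc_dropout_predict(x, n_samples=100):
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)        # prediction and uncertainty

mean, std = mc_dropout_predict(np.ones((4, 3)))
print(np.round(mean, 3), np.round(std, 3))
```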
arXiv Detail & Related papers (2020-11-12T06:41:05Z)
- A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [0.0]
A key factor for ensuring safety in Autonomous Vehicles (AVs) is to avoid any abnormal behavior under undesirable and unpredicted circumstances.
Different methods for uncertainty quantification have recently been proposed to measure the inevitable source of errors in data and models.
These methods require a higher computational load, a higher memory footprint, and introduce extra latency, which can be prohibitive in safety-critical applications.
arXiv Detail & Related papers (2020-06-26T18:55:10Z)