Model Uncertainty Quantification for Reliable Deep Vision Structural
Health Monitoring
- URL: http://arxiv.org/abs/2004.05151v1
- Date: Fri, 10 Apr 2020 17:54:10 GMT
- Title: Model Uncertainty Quantification for Reliable Deep Vision Structural
Health Monitoring
- Authors: Seyed Omid Sajedi, Xiao Liang
- Abstract summary: This paper proposes Bayesian inference for deep vision structural health monitoring models.
Uncertainty is quantified using Monte Carlo dropout sampling.
Three independent case studies for cracks, local damage identification, and bridge component detection are investigated.
- Score: 2.5126058470073263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision leveraging deep learning has achieved significant success in
the last decade. Despite the promising performance of the existing deep models
in the recent literature, the extent of models' reliability remains unknown.
Structural health monitoring (SHM) is a crucial task for the safety and
sustainability of structures, and thus prediction mistakes can have fatal
outcomes. This paper proposes Bayesian inference for deep vision SHM models
where uncertainty can be quantified using Monte Carlo dropout sampling.
Three independent case studies for cracks, local damage identification, and
bridge component detection are investigated using Bayesian inference. Aside
from improved prediction results, the two uncertainty metrics, mean class
softmax variance and entropy, are shown to correlate well with
misclassifications. While the uncertainty metrics can be used to trigger human
intervention and potentially improve prediction results, interpretation of
uncertainty masks can be challenging. Therefore, surrogate models are
introduced to take the uncertainty as input such that the performance can be
further boosted. The proposed methodology in this paper can be applied to
future deep vision SHM frameworks to incorporate model uncertainty in the
inspection processes.
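As a concrete illustration of the abstract's recipe, the sketch below shows Monte Carlo dropout sampling for a generic PyTorch image classifier together with the two uncertainty metrics named above, mean class softmax variance and predictive entropy. This is a minimal sketch under stated assumptions, not the authors' released code: the classifier, the number of samples, and the helper name mc_dropout_predict are illustrative.

    # Minimal MC-dropout sketch (illustrative, not the paper's code).
    # Assumes `model` is a PyTorch classifier that contains dropout layers.
    import torch
    import torch.nn.functional as F

    def mc_dropout_predict(model, x, n_samples=30):
        """Stochastic forward passes with dropout kept active at test time."""
        model.eval()
        # Re-enable dropout layers only, leaving batch norm etc. in eval mode.
        for m in model.modules():
            if m.__class__.__name__.startswith("Dropout"):
                m.train()
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )  # shape: (n_samples, batch, n_classes)
        mean_probs = probs.mean(dim=0)                  # model-averaged prediction
        softmax_var = probs.var(dim=0).mean(dim=-1)     # mean class softmax variance
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)  # predictive entropy
        return mean_probs, softmax_var, entropy

Either scalar can then be thresholded to flag low-confidence predictions for human review, or passed as an additional input to a surrogate model, in line with the workflow the abstract describes.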
Related papers
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- MonoProb: Self-Supervised Monocular Depth Estimation with Interpretable Uncertainty [4.260312058817663]
Self-supervised monocular depth estimation methods aim to be used in critical applications such as autonomous vehicles for environment analysis.
We propose MonoProb, a new unsupervised monocular depth estimation method that returns an interpretable uncertainty.
Our experiments highlight enhancements achieved by our method on standard depth and uncertainty metrics.
arXiv Detail & Related papers (2023-11-10T15:55:14Z)
- Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [17.49026509916207]
Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications.
For vision regression tasks, current AuxUE designs are mainly adopted for aleatoric uncertainty estimates.
We propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks.
arXiv Detail & Related papers (2023-08-17T15:54:11Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with surprisingly simple formations and without requiring extra modules or multiple inferences, can provide uncertainty estimations with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Integrating Uncertainty into Neural Network-based Speech Enhancement [27.868722093985006]
Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
arXiv Detail & Related papers (2023-05-15T15:55:12Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting that includes multiple models and supports several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Training, Architecture, and Prior for Deterministic Uncertainty Methods [33.45069308137142]
This work investigates important design choices in Deterministic Uncertainty Methods (DUMs).
We show that training schemes that decouple the core architecture from the uncertainty head can significantly improve uncertainty performance.
Contrary to other Bayesian models, we show that the priors defined by DUMs do not have a strong effect on the final performance.
arXiv Detail & Related papers (2023-03-10T09:00:52Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
The proposed UAL method aims to provide reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.