Uncertainty Decomposition and Error Margin Detection of Homodyned-K Distribution in Quantitative Ultrasound
- URL: http://arxiv.org/abs/2409.11583v1
- Date: Tue, 17 Sep 2024 22:16:49 GMT
- Title: Uncertainty Decomposition and Error Margin Detection of Homodyned-K Distribution in Quantitative Ultrasound
- Authors: Dorsa Ameri, Ali K. Z. Tehrani, Ivan M. Rosado-Mendez, Hassan Rivaz
- Abstract summary: Homodyned K-distribution (HK-distribution) parameter estimation in quantitative ultrasound (QUS) has been recently addressed using Bayesian Neural Networks (BNNs).
BNNs have been shown to significantly reduce computational time in speckle statistics-based QUS without compromising accuracy and precision.
- Score: 1.912429179274357
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Homodyned K-distribution (HK-distribution) parameter estimation in quantitative ultrasound (QUS) has been recently addressed using Bayesian Neural Networks (BNNs). BNNs have been shown to significantly reduce computational time in speckle statistics-based QUS without compromising accuracy and precision. Additionally, they provide estimates of feature uncertainty, which can guide the clinician's trust in the reported feature value. The total predictive uncertainty in Bayesian modeling can be decomposed into epistemic (uncertainty over the model parameters) and aleatoric (uncertainty inherent in the data) components. By decomposing the predictive uncertainty, we can gain insights into the factors contributing to the total uncertainty. In this study, we propose a method to compute epistemic and aleatoric uncertainties for HK-distribution parameters ($\alpha$ and $k$) estimated by a BNN, in both simulation and experimental data. In addition, we investigate the relationship between the prediction error and both uncertainties, shedding light on the interplay between these uncertainties and HK parameter errors.
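As a minimal sketch of the kind of decomposition described in the abstract (assumptions: Monte Carlo forward passes through a BNN that outputs a predicted mean and variance for one HK parameter such as $\alpha$ or $k$; the function and variable names below are illustrative, not the authors' code), the total predictive variance can be split via the law of total variance into the mean of the per-pass predicted variances (aleatoric) plus the variance of the per-pass predicted means (epistemic):

```python
# Hedged sketch of sample-based uncertainty decomposition for a BNN output.
# Assumption: each stochastic forward pass returns (mean, variance) for one
# HK parameter of a single envelope patch; names are hypothetical.
import numpy as np

def decompose_uncertainty(means: np.ndarray, variances: np.ndarray):
    """Law-of-total-variance decomposition over T stochastic forward passes.

    means, variances: arrays of shape (T,) with the per-pass predictive mean
    and predictive variance of one HK parameter for a single input.
    """
    aleatoric = variances.mean()   # expected data noise, E_w[Var(y | x, w)]
    epistemic = means.var()        # model disagreement, Var_w(E[y | x, w])
    total = aleatoric + epistemic  # total predictive variance
    return total, epistemic, aleatoric

# Example with T = 50 hypothetical Monte Carlo passes
rng = np.random.default_rng(0)
means = 1.2 + 0.05 * rng.standard_normal(50)   # predicted alpha per pass
variances = 0.02 + 0.005 * rng.random(50)      # predicted noise variance per pass
total, epi, ale = decompose_uncertainty(means, variances)
print(f"total={total:.4f}  epistemic={epi:.4f}  aleatoric={ale:.4f}")
```

Under this reading, a large epistemic term indicates disagreement across model samples (e.g., inputs unlike the training data), while a large aleatoric term reflects noise inherent in the speckle data that more training data cannot remove.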
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction [0.39287497907611874]
A core part of maintenance planning is a monitoring system that provides a good prognosis on health and degradation.
Here, we propose ensemble neural networks for probabilistic RUL prediction that account for both types of uncertainty and decouple them.
This method is tested on NASA's turbofan jet engine CMAPSS data-set.
arXiv Detail & Related papers (2023-09-21T19:38:44Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Homodyned K-distribution: parameter estimation and uncertainty quantification using Bayesian neural networks [2.599882743586164]
The parameters of Homodyned K-distribution (HK-distribution) are the speckle statistics that can model the envelope data in diverse scattering conditions.
We propose a Bayesian Neural Network (BNN) to estimate the parameters of HK-distribution and quantify the uncertainty of the estimator.
arXiv Detail & Related papers (2022-10-31T22:38:33Z)
- A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNNs) provide a powerful framework that elegantly integrates graph theory with machine learning.
We consider the problem of quantifying the uncertainty in predictions of GNN stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when using them in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.