Expert-aware uncertainty estimation for quality control of neural-based blood typing
- URL: http://arxiv.org/abs/2407.11181v1
- Date: Mon, 15 Jul 2024 19:07:02 GMT
- Title: Expert-aware uncertainty estimation for quality control of neural-based blood typing
- Authors: Ekaterina Zaychenkova, Dmitrii Iarchuk, Sergey Korchagin, Alexey Zaitsev, Egor Ershov
- Abstract summary: In medical diagnostics, accurate uncertainty estimation for neural-based models is essential for complementing second-opinion systems.
A major difficulty here is the lack of labels on the hardness of examples, making the uncertainty estimation problem almost unsupervised.
Our novel approach integrates expert assessments of case complexity into the neural network's learning process, utilizing both definitive target labels and supplementary complexity ratings.
Experiments demonstrate that our approach enhances uncertainty prediction, achieving a 2.5-fold improvement with expert labels and a 35% performance increase with estimates of neural-based expert consensus.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In medical diagnostics, accurate uncertainty estimation for neural-based models is essential for complementing second-opinion systems. Despite neural network ensembles' proficiency in this problem, a gap persists between actual uncertainties and predicted estimates. A major difficulty here is the lack of labels on the hardness of examples: a typical dataset includes only ground truth target labels, making the uncertainty estimation problem almost unsupervised. Our novel approach narrows this gap by integrating expert assessments of case complexity into the neural network's learning process, utilizing both definitive target labels and supplementary complexity ratings. We validate our methodology for blood typing, leveraging a new dataset "BloodyWell", unique in augmenting labeled reaction images with complexity scores from six medical specialists. Experiments demonstrate that our approach enhances uncertainty prediction, achieving a 2.5-fold improvement with expert labels and a 35% performance increase with estimates of neural-based expert consensus.
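The abstract describes a two-signal training setup: ground-truth blood-type labels plus expert complexity ratings. Below is a minimal PyTorch sketch of one plausible reading, a shared backbone with a classification head and an auxiliary head regressed onto the expert scores; all names, the toy dimensions, and the loss weight `lambda_c` are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ExpertAwareNet(nn.Module):
    """Shared backbone with two heads: class logits and a scalar complexity score."""
    def __init__(self, in_dim=128, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, n_classes)  # blood-type prediction
        self.unc_head = nn.Linear(64, 1)          # predicted case complexity

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.unc_head(h).squeeze(-1)

model = ExpertAwareNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lambda_c = 0.5  # assumed weight on the complexity term

# toy batch: flattened reaction images, target labels, expert complexity in [0, 1]
x = torch.randn(16, 128)
y = torch.randint(0, 4, (16,))
complexity = torch.rand(16)  # e.g. aggregated ratings of the six specialists

opt.zero_grad()
logits, pred_unc = model(x)
loss = ce(logits, y) + lambda_c * mse(pred_unc, complexity)
loss.backward()
opt.step()
```

At inference, `pred_unc` would serve directly as the uncertainty estimate, sidestepping the "almost unsupervised" difficulty by supervising it with the expert ratings.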
Related papers
- Deep Evidential Learning for Radiotherapy Dose Prediction [0.0]
We present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction.
We found that this model can be effectively harnessed to yield uncertainty estimates that correlate with prediction errors upon completion of network training.
arXiv Detail & Related papers (2024-04-26T02:43:45Z)
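For context on the entry above, deep evidential learning for regression has a well-known generic form: the network outputs the parameters of a Normal-Inverse-Gamma distribution, and aleatoric and epistemic uncertainty follow in closed form. A minimal sketch of that generic recipe, assuming features from an arbitrary backbone; it is not the radiotherapy paper's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Outputs Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) per target."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.fc = nn.Linear(in_dim, 4)

    def forward(self, h):
        gamma, log_nu, log_alpha, log_beta = self.fc(h).chunk(4, dim=-1)
        nu = F.softplus(log_nu)
        alpha = F.softplus(log_alpha) + 1.0  # keep alpha > 1 so variance is finite
        beta = F.softplus(log_beta)
        return gamma, nu, alpha, beta

head = EvidentialHead()
h = torch.randn(8, 64)                   # features from any backbone
gamma, nu, alpha, beta = head(h)         # gamma is the point prediction (e.g. dose)
aleatoric = beta / (alpha - 1.0)         # expected data noise
epistemic = beta / (nu * (alpha - 1.0))  # model uncertainty; correlates with error
```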
- Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks [101.56637264703058]
We show that a variational Bayesian neural network approach can be used to improve uncertainty estimates.
We propose a new measure of uncertainty for contrastive learning, that is based on the disagreement in likelihood due to different positive samples.
arXiv Detail & Related papers (2023-11-30T22:32:24Z)
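One plausible reading of "disagreement in likelihood due to different positive samples" in the entry above is the variance of an InfoNCE-style likelihood as the positive is swapped. The sketch below implements that generic interpretation only; the temperature `tau` and all shapes are assumptions, not the paper's estimator.

```python
import torch
import torch.nn.functional as F

def positive_disagreement(anchor, positives, negatives, tau=0.1):
    """Variance of a contrastive likelihood across alternative positives.

    anchor: (d,), positives: (P, d), negatives: (N, d).
    High variance = the positives disagree = high uncertainty.
    """
    neg_sim = negatives @ anchor / tau  # (N,) similarities to shared negatives
    pos_sim = positives @ anchor / tau  # (P,) similarity to each candidate positive
    # likelihood of each candidate positive against the shared negatives
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim.expand(len(pos_sim), -1)], dim=1)
    lik = F.softmax(logits, dim=1)[:, 0]  # (P,) probability mass on the positive
    return lik.var()

anchor = F.normalize(torch.randn(32), dim=0)
positives = F.normalize(torch.randn(4, 32), dim=1)
negatives = F.normalize(torch.randn(64, 32), dim=1)
u = positive_disagreement(anchor, positives, negatives)
```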
- Leveraging Unlabelled Data in Multiple-Instance Learning Problems for Improved Detection of Parkinsonian Tremor in Free-Living Conditions [80.88681952022479]
We introduce a new method for combining semi-supervised learning with multiple-instance learning.
We show that by leveraging the unlabelled data of 454 subjects we can achieve large performance gains in per-subject tremor detection.
arXiv Detail & Related papers (2023-04-29T12:25:10Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
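The subjective logic theory cited in the DEviS entry above has a standard recipe: map logits to non-negative evidence, form a Dirichlet distribution, and read off an uncertainty mass. A minimal sketch of that recipe follows; it is not the paper's architecture, and the threshold standing in for its filtering metric is an assumption.

```python
import torch
import torch.nn.functional as F

def subjective_logic_uncertainty(logits):
    """Map per-voxel logits to Dirichlet evidence and an uncertainty mass.

    logits: (B, K, H, W) raw network outputs for K classes.
    """
    evidence = F.softplus(logits)              # non-negative evidence per class
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)  # S = sum_k alpha_k
    prob = alpha / strength                    # expected class probabilities
    k = logits.shape[1]
    uncertainty = k / strength.squeeze(1)      # u = K / S, in (0, 1]
    return prob, uncertainty

logits = torch.randn(2, 3, 8, 8)  # toy 3-class segmentation batch
prob, unc = subjective_logic_uncertainty(logits)
reliable = unc < 0.5              # crude stand-in for the filtering module
```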
- BayesNetCNN: incorporating uncertainty in neural networks for image-based classification tasks [0.29005223064604074]
We propose a method to convert a standard neural network into a Bayesian neural network.
We estimate the variability of predictions by sampling different networks similar to the original one at each forward pass.
We test our model in a large cohort of brain images from Alzheimer's Disease patients.
arXiv Detail & Related papers (2022-09-27T01:07:19Z)
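"Sampling different networks similar to the original one" in the entry above can be illustrated with the simplest possible stand-in: jittering a trained network's weights with Gaussian noise across several forward passes. The paper may instead learn a proper posterior over weights; the sketch below only conveys the sampling idea, and the noise scale `sigma` is assumed.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def perturbed_predictions(model, x, n_samples=20, sigma=0.01):
    """Predictive mean and variance from weight-jittered copies of the model."""
    outs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))  # a network 'similar to the original'
        outs.append(noisy(x).softmax(dim=-1))
    outs = torch.stack(outs)          # (T, B, C)
    return outs.mean(0), outs.var(0)  # high variance = uncertain prediction

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
mean, var = perturbed_predictions(model, torch.randn(4, 16))
```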
- Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust [1.1199585259018459]
Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images.
In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach.
This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and model's confidence.
arXiv Detail & Related papers (2022-09-22T09:20:05Z)
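The three voxel-level estimators named in the entry above (entropy, variance, confidence) are standard functions of Monte-Carlo softmax samples. A sketch of their computation follows; the fusion GNN itself is not reproduced.

```python
import torch

def voxel_uncertainty_estimators(mc_probs):
    """Per-voxel entropy, variance, and confidence from MC softmax samples.

    mc_probs: (T, K, H, W, D) class probabilities from T stochastic passes.
    """
    mean_p = mc_probs.mean(dim=0)                                  # (K, H, W, D)
    entropy = -(mean_p * mean_p.clamp_min(1e-8).log()).sum(dim=0)  # predictive entropy
    variance = mc_probs.var(dim=0).mean(dim=0)                     # spread across samples
    confidence = mean_p.max(dim=0).values                          # top-class probability
    return entropy, variance, confidence

mc_probs = torch.softmax(torch.randn(10, 2, 4, 4, 4), dim=1)  # toy binary 3-D volume
entropy, variance, confidence = voxel_uncertainty_estimators(mc_probs)
```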
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
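A discrimination module trained against bias, as in the entry above, is commonly realized with gradient reversal: the adversary learns to recover the protected attribute while reversed gradients push the encoder to discard it. A minimal sketch of that standard pattern, not the paper's exact modules (its critical module predicting unfairness is omitted):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
classifier = nn.Linear(16, 2)     # base task, e.g. lesion class
discriminator = nn.Linear(16, 2)  # tries to recover the protected attribute

x = torch.randn(8, 32)
y = torch.randint(0, 2, (8,))
a = torch.randint(0, 2, (8,))     # protected attribute label (assumed binary)

h = encoder(x)
ce = nn.CrossEntropyLoss()
# reversed gradients push the encoder to *remove* attribute information
loss = ce(classifier(h), y) + ce(discriminator(GradReverse.apply(h)), a)
loss.backward()
```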
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
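Modeling "the distribution of latent representations" from the entry above is often done by fitting a Gaussian to penultimate-layer features and scoring new inputs with a Mahalanobis distance. A sketch of that common baseline, with all dimensions assumed:

```python
import torch

def fit_latent_gaussian(latents):
    """Fit a single Gaussian to training-set activations."""
    mu = latents.mean(dim=0)
    centered = latents - mu
    cov = centered.T @ centered / (len(latents) - 1)
    cov += 1e-4 * torch.eye(cov.shape[0])  # regularize for invertibility
    return mu, torch.linalg.inv(cov)

def mahalanobis_score(z, mu, prec):
    """Larger = further from the training distribution (uncertainty/OOD proxy)."""
    d = z - mu
    return (d @ prec * d).sum(dim=-1)

train_latents = torch.randn(500, 16)  # stand-in for penultimate-layer features
mu, prec = fit_latent_gaussian(train_latents)
score = mahalanobis_score(torch.randn(4, 16), mu, prec)
```

Whether this density also tracks epistemic uncertainty, rather than just OOD-ness, is exactly the question the paper investigates.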
- Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records [30.65770563934045]
We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
arXiv Detail & Related papers (2020-03-23T10:36:52Z)
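Deep kernel learning, one half of the merge described in the entry above, places a GP kernel on features produced by a neural network. Below is a self-contained sketch of exact GP regression with such a deep RBF kernel, using a point-estimate network rather than the paper's Bayesian treatment; all dimensions and the noise level are assumptions.

```python
import torch
import torch.nn as nn

class DeepRBFKernel(nn.Module):
    """RBF kernel evaluated on learned features: the core of deep kernel learning."""
    def __init__(self, in_dim=12, feat_dim=4, lengthscale=1.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, feat_dim))
        self.lengthscale = lengthscale

    def forward(self, x1, x2):
        f1, f2 = self.net(x1), self.net(x2)
        d2 = torch.cdist(f1, f2).pow(2)
        return torch.exp(-0.5 * d2 / self.lengthscale**2)

kernel = DeepRBFKernel()
X, y = torch.randn(50, 12), torch.randn(50)  # stand-in EHR features and outcomes
Xq = torch.randn(5, 12)                      # query patients
noise = 0.1                                  # assumed observation noise

# standard exact-GP posterior: mean and variance at the query points
K = kernel(X, X) + noise * torch.eye(len(X))
Ks, Kss = kernel(Xq, X), kernel(Xq, Xq)
K_inv = torch.linalg.inv(K)
mean = Ks @ K_inv @ y
var = Kss.diagonal() - (Ks @ K_inv @ Ks.T).diagonal()  # wide var = low confidence
```

The posterior variance grows for inputs far from the training data in feature space, which is why such models are less prone to overconfident predictions on minority-class patients.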
This list is automatically generated from the titles and abstracts of the papers on this site.