Faster Deep Ensemble Averaging for Quantification of DNA Damage from
Comet Assay Images With Uncertainty Estimates
- URL: http://arxiv.org/abs/2112.12839v1
- Date: Thu, 23 Dec 2021 20:48:28 GMT
- Title: Faster Deep Ensemble Averaging for Quantification of DNA Damage from
Comet Assay Images With Uncertainty Estimates
- Authors: Srikanth Namuduri, Prateek Mehta, Lise Barbe, Stephanie Lam, Zohreh
Faghihmonzavi, Steve Finkbeiner, Shekhar Bhansali
- Abstract summary: We present an approach to quantify the extent of DNA damage that combines deep learning with a rigorous and comprehensive method to optimize the hyper-parameters.
We applied our approach to a comet assay dataset with more than 1300 images and achieved an $R^2$ of 0.84, where the output included the confidence interval for each prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several neurodegenerative diseases involve the accumulation of cellular DNA
damage. Comet assays are a popular way of estimating the extent of DNA damage.
Current literature on the use of deep learning to quantify DNA damage presents
an empirical approach to hyper-parameter optimization and does not include
uncertainty estimates. Deep ensemble averaging is a standard approach to
estimating uncertainty but it requires several iterations of network training,
which makes it time-consuming. Here we present an approach to quantify the
extent of DNA damage that combines deep learning with a rigorous and
comprehensive method to optimize the hyper-parameters with the help of
statistical tests. We also use an architecture that allows for a faster
computation of deep ensemble averaging and performs statistical tests
applicable to networks using transfer learning. We applied our approach to a
comet assay dataset with more than 1300 images and achieved an $R^2$ of 0.84,
where the output included the confidence interval for each prediction. The
proposed architecture is an improvement over the current approaches since it
speeds up the uncertainty estimation by 30X while being statistically more
rigorous.
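The speed-up described in the abstract comes from an architecture in which a shared transfer-learning backbone is computed once per batch, while only an ensemble of lightweight heads is evaluated repeatedly; their spread yields the confidence interval. Below is a minimal sketch of that averaging mechanics, not the authors' implementation: the backbone, heads, shapes, and weights are all hypothetical random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen transfer-learning backbone:
# one shared feature-extraction pass serves every ensemble member.
W_backbone = rng.normal(size=(64, 16))

def extract_features(images):
    """Shared backbone forward pass, computed once per image batch."""
    return np.tanh(images @ W_backbone)

# Ensemble of lightweight regression heads (one weight vector each).
# In the paper's setting these would be trained; here they are random.
n_heads = 5
heads = [rng.normal(size=16) for _ in range(n_heads)]

def ensemble_predict(images, z=1.96):
    """Return the ensemble-mean prediction and a 95% confidence
    interval per image, using a normal approximation."""
    feats = extract_features(images)              # backbone runs once, not n_heads times
    preds = np.stack([feats @ w for w in heads])  # shape: (n_heads, n_images)
    mean = preds.mean(axis=0)
    # Standard error of the ensemble mean -> normal-approximation CI
    sem = preds.std(axis=0, ddof=1) / np.sqrt(n_heads)
    return mean, (mean - z * sem, mean + z * sem)

images = rng.normal(size=(4, 64))  # 4 stand-in "images" as flat vectors
mean, (lo, hi) = ensemble_predict(images)
```

Because the expensive backbone pass is shared, adding ensemble members costs only one cheap head evaluation each, which is the kind of architectural reuse that makes the reported 30X speed-up plausible.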
Related papers
- Bayesian optimized deep ensemble for uncertainty quantification of deep neural networks: a system safety case study on sodium fast reactor thermal stratification modeling [10.055838489452817]
Deep ensembles (DEs) are efficient and scalable methods for uncertainty quantification (UQ) in Deep Neural Networks (DNNs).
We propose a novel method that combines Bayesian optimization (BO) with DE, referred to as BODE, to enhance both predictive accuracy and UQ.
We apply BODE to a case study involving a Densely connected Convolutional Neural Network (DCNN) trained on computational fluid dynamics (CFD) data to predict eddy viscosity in sodium fast reactor thermal stratification modeling.
arXiv Detail & Related papers (2024-12-11T21:06:50Z)
- Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation for Time Series [45.76310830281876]
We propose Quantile Sub-Ensembles, a novel method to estimate uncertainty with an ensemble of quantile-regression-based task networks.
Our method not only produces accurate imputations that are robust to high missing rates, but is also computationally efficient due to the fast training of its non-generative model.
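Quantile-regression networks like the sub-ensembles above are typically trained with the pinball (quantile) loss, which is minimized in expectation by the q-th conditional quantile. A minimal sketch of that loss, not the paper's implementation:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: an asymmetric penalty whose minimizer,
    in expectation, is the q-th conditional quantile of y_true."""
    err = y_true - y_pred
    # Under-prediction (err > 0) is weighted by q,
    # over-prediction (err < 0) by (1 - q).
    return np.mean(np.maximum(q * err, (q - 1) * err))
```

With q = 0.9, under-prediction is penalized nine times more heavily than over-prediction, pushing the fitted output toward the 90th percentile; training several such heads at different q values yields a predictive interval.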
arXiv Detail & Related papers (2023-12-03T05:52:30Z)
- Can input reconstruction be used to directly estimate uncertainty of a regression U-Net model? -- Application to proton therapy dose prediction for head and neck cancer patients [0.8343441027226364]
We present an alternative direct uncertainty estimation method and apply it for a regression U-Net architecture.
For the proof-of-concept, our method is applied to proton therapy dose prediction in head and neck cancer patients.
arXiv Detail & Related papers (2023-10-30T16:04:34Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Robust Learning via Persistency of Excitation [4.674053902991301]
We show that network training using gradient descent is equivalent to a dynamical system parameter estimation problem.
We provide an efficient technique for estimating the corresponding Lipschitz constant using extreme value theory.
Our approach also universally increases the adversarial accuracy by 0.1 to 0.3 percentage points in various state-of-the-art adversarially trained models.
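A common starting point for empirical Lipschitz-constant estimation is sampling the slopes of random secants of the model. The paper fits the tail of such samples with extreme value theory; this hedged sketch simply takes the maximum over a toy 1-D function, and every name in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Toy 1-D function standing in for a trained network's scalar output.
    # Its true Lipschitz constant is 3 (max of |d/dx sin(3x)| = |3 cos(3x)|).
    return np.sin(3.0 * x)

def empirical_lipschitz(f, n_pairs=10_000, low=-2.0, high=2.0):
    """Crude lower bound on the Lipschitz constant from random input pairs.
    (Extreme value theory, as in the paper, would instead fit the tail of
    these slope samples; this sketch just takes the sample maximum.)"""
    x = rng.uniform(low, high, n_pairs)
    y = rng.uniform(low, high, n_pairs)
    slopes = np.abs(f(x) - f(y)) / (np.abs(x - y) + 1e-12)
    return slopes.max()

L_hat = empirical_lipschitz(f)
```

The sample maximum can only underestimate the true constant, which is why tail modeling (extreme value theory) is needed to extrapolate beyond the observed slopes.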
arXiv Detail & Related papers (2021-06-03T18:49:05Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Variable Skipping for Autoregressive Range Density Estimation [84.60428050170687]
We show a technique, variable skipping, for accelerating range density estimation over deep autoregressive models.
We show that variable skipping provides 10-100$\times$ efficiency improvements when targeting challenging high-quantile error metrics.
arXiv Detail & Related papers (2020-07-10T19:01:40Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.