LogGENE: A smooth alternative to check loss for Deep Healthcare
Inference Tasks
- URL: http://arxiv.org/abs/2206.09333v3
- Date: Tue, 2 May 2023 17:28:16 GMT
- Title: LogGENE: A smooth alternative to check loss for Deep Healthcare
Inference Tasks
- Authors: Aryaman Jeendgar, Tanmay Devale, Soma S Dhavala, Snehanshu Saha
- Abstract summary: In our work, we develop methods for Deep neural network based inference on datasets such as Gene Expression.
We adopt the Quantile Regression framework to predict full conditional quantiles for a given set of housekeeping gene expressions.
We propose log-cosh as a smooth alternative to the check loss to drive the estimation process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mining large datasets and obtaining calibrated predictions from them is of
immediate relevance and utility in reliable deep learning. In our work, we
develop methods for Deep neural network based inference on datasets such as
Gene Expression. However, unlike typical Deep learning methods, our
inferential technique, while achieving state-of-the-art performance in terms of
accuracy, can also provide explanations, and report uncertainty estimates. We
adopt the Quantile Regression framework to predict full conditional quantiles
for a given set of housekeeping gene expressions. Conditional quantiles, in
addition to being useful in providing rich interpretations of the predictions,
are also robust to measurement noise. Our technique is particularly
consequential in High-throughput Genomics, an area which is ushering in a new era
in personalized health care, and targeted drug design and delivery. However,
check loss, used in quantile regression to drive the estimation process is not
differentiable. We propose log-cosh as a smooth alternative to the check loss.
We apply our methods to the GEO microarray dataset. We also extend the method to
the binary classification setting. Furthermore, we investigate another consequence
of the smoothness of the loss: faster convergence. We further apply the
classification framework to other healthcare inference tasks such as heart
disease, breast cancer, and diabetes. As a test of the generalization ability of
our framework, other non-healthcare related data sets for regression and
classification tasks are also evaluated.
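To make the loss comparison concrete, the sketch below contrasts the non-differentiable check (pinball) loss with a log-cosh-based smooth surrogate. It relies on the identity check_loss(e) = (tau - 1/2)e + |e|/2 together with the approximation |e| ≈ log(cosh(beta*e))/beta; the paper's exact formulation may differ, so this particular surrogate form is an illustrative assumption.

```python
import numpy as np

def check_loss(e, tau):
    """Standard (non-differentiable) check/pinball loss for quantile tau."""
    return np.maximum(tau * e, (tau - 1.0) * e)

def logcosh_check_loss(e, tau, beta=10.0):
    """Smooth surrogate for the check loss (illustrative assumption):
    replaces |e| in check_loss(e) = (tau - 0.5)*e + 0.5*|e|
    with the smooth approximation log(cosh(beta*e)) / beta."""
    return (tau - 0.5) * e + 0.5 * np.log(np.cosh(beta * e)) / beta

# The surrogate tracks the check loss closely away from e = 0
# while remaining differentiable everywhere:
errors = np.linspace(-2.0, 2.0, 9)
exact = check_loss(errors, tau=0.9)
smooth = logcosh_check_loss(errors, tau=0.9)
```

As beta grows, the surrogate approaches the check loss pointwise while staying smooth at e = 0, which is what enables standard gradient-based training of the quantile network.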
Related papers
- Semi-Supervised Deep Regression with Uncertainty Consistency and
Variational Model Ensembling via Bayesian Neural Networks [31.67508478764597]
We propose a novel approach to semi-supervised regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME)
Our consistency loss significantly improves uncertainty estimates and allows higher quality pseudo-labels to be assigned greater importance under heteroscedastic regression.
Experiments show that our method outperforms state-of-the-art alternatives on different tasks and can be competitive with supervised methods that use full labels.
arXiv Detail & Related papers (2023-02-15T10:40:51Z)
- IB-UQ: Information bottleneck based uncertainty quantification for
neural function regression and neural operator learning [11.5992081385106]
We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks.
We incorporate the bottleneck by a confidence-aware encoder, which encodes inputs into latent representations according to the confidence of the input data.
We also propose a data augmentation based information bottleneck objective which can enhance the quality of the extrapolation uncertainty.
arXiv Detail & Related papers (2023-02-07T05:56:42Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
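The ATC idea summarized above can be sketched as follows. The specific threshold rule (choosing t so that the fraction of low-confidence source points matches the source error rate) is an assumption based on this summary, not necessarily the paper's exact procedure.

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    """Sketch of Average Thresholded Confidence (ATC).
    Picks a threshold t on source confidences so that the fraction
    of source examples below t matches the observed source error
    rate, then predicts target accuracy as the fraction of
    unlabeled target confidences at or above t."""
    src_err = 1.0 - np.mean(src_correct)
    t = np.quantile(src_conf, src_err)  # fraction below t ~ source error
    return float(np.mean(np.asarray(tgt_conf) >= t))
```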
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Deep Quantile Regression for Uncertainty Estimation in Unsupervised and
Supervised Lesion Detection [0.0]
Uncertainty is important in critical applications such as anomaly or lesion detection and clinical diagnosis.
In this work, we focus on using quantile regression to estimate aleatoric uncertainty and use it for estimating uncertainty in both supervised and unsupervised lesion detection problems.
We show how quantile regression can be used to characterize expert disagreement in the location of lesion boundaries.
arXiv Detail & Related papers (2021-09-20T08:50:21Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
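The importance-sampling weighted ERM idea summarized above can be sketched generically: each sample's loss is reweighted by the inverse of the propensity with which it was collected. This is a minimal illustration of the general technique, not the paper's exact estimator or its guarantees.

```python
import numpy as np

def is_weighted_erm_loss(losses, propensities):
    """Importance-sampling weighted empirical risk: reweight each
    observed loss by the inverse of the (adaptive) collection
    propensity of its data point, correcting for the fact that the
    data were not sampled i.i.d. from the target distribution."""
    w = 1.0 / np.asarray(propensities, dtype=float)
    return float(np.mean(w * np.asarray(losses, dtype=float)))
```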
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep
Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Estimation and Applications of Quantiles in Deep Binary Classification [0.0]
Quantile regression, based on check loss, is a widely used inferential paradigm in Statistics.
We consider the analogue of check loss in the binary classification setting.
We develop individualized confidence scores that can be used to decide whether a prediction is reliable.
arXiv Detail & Related papers (2021-02-09T07:07:42Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Approximate kNN Classification for Biomedical Data [1.1852406625172218]
Single-cell RNA-seq (scRNA-seq) is an emerging DNA sequencing technology with promising capabilities but significant computational challenges.
We propose the utilization of approximate nearest neighbor search algorithms for the task of kNN classification in scRNA-seq data.
arXiv Detail & Related papers (2020-12-03T18:30:43Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.