Evaluating Point-Prediction Uncertainties in Neural Networks for Drug
Discovery
- URL: http://arxiv.org/abs/2210.17043v1
- Date: Mon, 31 Oct 2022 03:45:11 GMT
- Title: Evaluating Point-Prediction Uncertainties in Neural Networks for Drug
Discovery
- Authors: Ya Ju Fan, Jonathan E. Allen, Kevin S. McLoughlin, Da Shi, Brian J.
Bennion, Xiaohua Zhang, and Felice C. Lightstone
- Abstract summary: Neural Network (NN) models have the potential to speed up the drug discovery process and reduce its failure rates.
The success of NN models requires uncertainty quantification (UQ), as drug discovery explores chemical space beyond the training data distribution.
In this paper, we examine UQ methods that estimate different sources of predictive uncertainty for NN models aimed at drug discovery.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Network (NN) models have the potential to speed up the drug
discovery process and reduce its failure rates. The success of NN models
requires uncertainty quantification (UQ) as drug discovery explores chemical space
beyond the training data distribution. Standard NN models do not provide
uncertainty information. Methods that combine Bayesian models with NN models
address this issue, but are difficult to implement and more expensive to train.
Some methods require changing the NN architecture or training procedure,
limiting the selection of NN models. Moreover, predictive uncertainty can come
from different sources. It is important to have the ability to separately model
different types of predictive uncertainty, as the model can take assorted
actions depending on the source of uncertainty. In this paper, we examine UQ
methods that estimate different sources of predictive uncertainty for NN models
aimed at drug discovery. We use our prior knowledge of chemical compounds to
design the experiments. Using a visualization method, we create
non-overlapping and chemically diverse partitions from a collection of chemical
compounds. These partitions are used as training and test set splits to explore
NN model uncertainty. We demonstrate how the uncertainties estimated by the
selected methods capture different sources of uncertainty under different
partitions and featurization schemes, and how they relate to prediction error.
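One common way to realize the separation of uncertainty sources described in the abstract is a deep ensemble of heteroscedastic regressors: disagreement between ensemble members estimates epistemic (model) uncertainty, while the mean predicted noise variance estimates aleatoric (data) uncertainty. The sketch below is a generic illustration with synthetic numbers, not necessarily the paper's specific method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ensemble of heteroscedastic regressors: each member
# predicts a mean and a variance for every test compound. Here the member
# outputs are faked; in practice they come from independently trained NNs.
n_members, n_compounds = 5, 4
means = rng.normal(loc=1.0, scale=0.3, size=(n_members, n_compounds))
variances = rng.uniform(0.05, 0.2, size=(n_members, n_compounds))

# Epistemic (model) uncertainty: disagreement between ensemble members.
epistemic = means.var(axis=0)
# Aleatoric (data) uncertainty: average predicted noise variance.
aleatoric = variances.mean(axis=0)
# Total predictive variance is their sum under this decomposition.
total = epistemic + aleatoric

print(epistemic, aleatoric, total)
```

A model can then act on the dominant source: high epistemic uncertainty suggests acquiring more training data near the compound, while high aleatoric uncertainty suggests the measurement itself is noisy.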
Related papers
- Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets [0.13265175299265505]
We propose To-hull Uncertainty and Closure Ratio, which measure the uncertainty of a trained model based on the convex hull of the training data.
They capture the positional relation between the convex hull of the learned data and an unseen sample, indicating how far the sample extrapolates beyond the convex hull.
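The in-hull test this idea relies on can be posed as a linear-programming feasibility problem: a point x lies in the convex hull of the training points iff some convex combination of them equals x. A minimal sketch of that generic test (not the authors' To-hull implementation; SciPy is assumed to be available):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """Check whether x lies in the convex hull of `points` (n_points x dim)
    by solving the LP feasibility problem: find lambda >= 0 with
    sum(lambda) = 1 and points.T @ lambda = x."""
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(np.asarray(x, dtype=float), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(in_convex_hull([0.5, 0.5], square))   # inside the hull
print(in_convex_hull([2.0, 2.0], square))   # outside -> extrapolation
```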
arXiv Detail & Related papers (2024-05-25T06:25:24Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles [0.7499722271664145]
Neural networks (NNs) often assign high confidence to their predictions, even for points far out-of-distribution.
Uncertainty quantification (UQ) is a challenge when NNs are employed to model interatomic potentials in materials systems.
Differentiable UQ techniques can find new informative data and drive active learning loops for robust potentials.
arXiv Detail & Related papers (2023-05-02T19:41:17Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
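A surprise index of this kind can be sketched from posterior predictive samples: form a credible band from posterior quantiles and flag observations that fall outside it. The definition below is illustrative, not necessarily the paper's exact index:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for posterior predictive samples of a model output over time:
# rows are posterior draws, columns are time steps.
samples = rng.normal(loc=0.0, scale=1.0, size=(2000, 50))
observed = np.zeros(50)
observed[-1] = 6.0  # an anomalous measurement at the final step

# 95% credible band from posterior quantiles.
lo, hi = np.quantile(samples, [0.025, 0.975], axis=0)

# A simple "surprise index": 1 where the observation falls outside the
# credible band, 0 otherwise.
surprise = ((observed < lo) | (observed > hi)).astype(int)
print(surprise.sum())
```

A persistent run of surprising steps would indicate the model is being used in an out-of-distribution regime.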
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Uncertainty quantification for predictions of atomistic neural networks [0.0]
This paper explores the value of uncertainty quantification on predictions for trained neural networks (NNs) on quantum chemical reference data.
The architecture of the PhysNet NN was suitably modified and the resulting model was evaluated with different metrics to quantify calibration, quality of predictions, and whether prediction error and the predicted uncertainty can be correlated.
arXiv Detail & Related papers (2022-07-14T13:39:43Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
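The Nadaraya-Watson estimate at the core of this approach can be sketched as a kernel-weighted class histogram, with predictive entropy serving as the uncertainty score. This is a bare-bones illustration on toy data, not the authors' full NUQ procedure:

```python
import numpy as np

def nw_class_probs(x_query, X_train, y_train, n_classes, bandwidth=0.5):
    """Nadaraya-Watson estimate of the conditional label distribution
    p(y | x), using a Gaussian kernel over feature space."""
    d2 = ((X_train - x_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Two well-separated classes in 2-D (illustrative data).
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])

near_class0 = nw_class_probs(np.array([0.05, 0.0]), X, y, 2)
midpoint = nw_class_probs(np.array([2.5, 2.5]), X, y, 2)

# Predictive entropy as the uncertainty score: low near the data,
# higher where neither class dominates.
entropy = lambda p: -(p * np.log(p + 1e-12)).sum()
print(entropy(near_class0), entropy(midpoint))
```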
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Multidimensional Uncertainty-Aware Evidential Neural Networks [21.716045815385268]
We propose a novel uncertainty-aware evidential NN called WGAN-ENN (WENN) for solving an out-of-distribution (OOD) detection problem.
We take a hybrid approach that combines a Wasserstein Generative Adversarial Network (WGAN) with ENNs to jointly train a model with prior knowledge of a certain class.
We demonstrated that the estimation of uncertainty by WENN can significantly help distinguish OOD samples from boundary samples.
arXiv Detail & Related papers (2020-12-26T04:28:56Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
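For contrast with the Bayesian entries above, the simplest frequentist scheme with a finite-sample coverage guarantee is a split-conformal interval calibrated on held-out residuals. The sketch below uses that scheme as a stand-in; it is not the paper's blockwise influence-function method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Calibration data and a fixed, pre-trained predictor (both synthetic here).
x_cal = rng.uniform(-3, 3, size=200)
y_cal = 2.0 * x_cal + rng.normal(scale=0.5, size=200)
predict = lambda x: 2.0 * x

# Split-conformal calibration: the interval half-width is a quantile of the
# absolute held-out residuals, chosen to guarantee ~90% coverage.
resid = np.abs(y_cal - predict(x_cal))
alpha = 0.1
k = int(np.ceil((len(resid) + 1) * (1 - alpha)))
q = np.sort(resid)[k - 1]

# Prediction interval for a new point.
x_new = 1.5
lower, upper = predict(x_new) - q, predict(x_new) + q
print(lower, upper)
```

Like the frequentist method summarized above, this does not interfere with model training and applies to any predictor, though conformal intervals trade the influence-function machinery for a held-out calibration split.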
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.