Conditional Neural Processes for Molecules
- URL: http://arxiv.org/abs/2210.09211v1
- Date: Mon, 17 Oct 2022 16:10:12 GMT
- Title: Conditional Neural Processes for Molecules
- Authors: Miguel Garcia-Ortegon, Andreas Bender and Sergio Bacallado
- Abstract summary: Neural processes (NPs) are models for transfer learning with properties reminiscent of Gaussian Processes (GPs).
This paper applies the conditional neural process (CNP) to DOCKSTRING, a dataset of docking scores for benchmarking ML models.
CNPs show competitive performance in few-shot learning tasks relative to supervised learning baselines common in QSAR modelling, as well as relative to an alternative transfer learning model based on pre-training and refining neural network regressors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural processes (NPs) are models for transfer learning with properties
reminiscent of Gaussian Processes (GPs). They are adept at modelling data
consisting of few observations of many related functions on the same input
space and are trained by minimizing a variational objective, which is
computationally much less expensive than the Bayesian updating required by GPs.
So far, most studies of NPs have focused on low-dimensional datasets which are
not representative of realistic transfer learning tasks. Drug discovery is one
application area that is characterized by datasets consisting of many chemical
properties or functions which are sparsely observed, yet depend on shared
features or representations of the molecular inputs. This paper applies the
conditional neural process (CNP) to DOCKSTRING, a dataset of docking scores for
benchmarking ML models. CNPs show competitive performance in few-shot learning
tasks relative to supervised learning baselines common in QSAR modelling, as
well as an alternative model for transfer learning based on pre-training and
refining neural network regressors. We present a Bayesian optimization
experiment which showcases the probabilistic nature of CNPs and discuss
shortcomings of the model in uncertainty quantification.
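The encode-aggregate-decode structure of a CNP described in the abstract can be sketched as follows. This is a minimal illustrative forward pass with random, untrained weights; the dimensions, helper names, and toy 1-D regression task are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a conditional neural process (CNP) forward pass.
# Weights are random and untrained; shown only to illustrate the
# encode-aggregate-decode structure.
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Two-layer MLP with a tanh hidden activation."""
    w1, b1, w2, b2 = params
    return np.tanh(x @ w1 + b1) @ w2 + b2

def init(d_in, d_hid, d_out, rng):
    return (rng.normal(size=(d_in, d_hid)) * 0.5, np.zeros(d_hid),
            rng.normal(size=(d_hid, d_out)) * 0.5, np.zeros(d_out))

D_REP = 8
enc = init(2, 16, D_REP, rng)       # encoder: (x, y) pair -> representation
dec = init(D_REP + 1, 16, 2, rng)   # decoder: (r, x*) -> (mean, log-variance)

def cnp_predict(x_ctx, y_ctx, x_tgt):
    # Encode each context point and aggregate by the mean, which makes
    # the model invariant to the ordering of the context set.
    r_i = mlp(enc, np.column_stack([x_ctx, y_ctx]))
    r = r_i.mean(axis=0)
    # Decode every target jointly with the shared representation r.
    inp = np.column_stack([np.tile(r, (len(x_tgt), 1)), x_tgt])
    out = mlp(dec, inp)
    mean, log_var = out[:, 0], out[:, 1]
    return mean, np.exp(log_var)    # predictive mean and variance

x_ctx = np.array([-1.0, 0.0, 1.0])
y_ctx = np.sin(x_ctx)
mean, var = cnp_predict(x_ctx, y_ctx, np.linspace(-2.0, 2.0, 5))
```

In training, the encoder and decoder weights would be fit by maximizing the Gaussian log-likelihood of held-out target points given sampled context sets, which is the inexpensive objective the abstract contrasts with Bayesian updating in GPs.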
Related papers
- Active Learning with Fully Bayesian Neural Networks for Discontinuous and Nonstationary Data
We introduce fully Bayesian Neural Networks (FBNNs) for active learning tasks in the 'small data' regime.
FBNNs provide reliable predictive distributions, crucial for making informed decisions under uncertainty in the active learning setting.
Here, we assess the suitability and performance of FBNNs with the No-U-Turn Sampler for active learning tasks in the 'small data' regime.
arXiv Detail & Related papers (2024-05-16T05:20:47Z)
- Spectral Convolutional Conditional Neural Processes
Conditional Neural Processes (CNPs) constitute a family of probabilistic models that harness the flexibility of neural networks to parameterize processes.
We propose Spectral Convolutional Conditional Neural Processes (SConvCNPs), a new addition to the NPs family that allows for more efficient representation of functions in the frequency domain.
arXiv Detail & Related papers (2024-04-19T21:13:18Z)
- Gaussian Process Neural Additive Models
We propose a new subclass of Neural Additive Models (NAMs) that use a single-layer neural network construction of the Gaussian process via random Fourier features.
GP-NAMs have the advantage of a convex objective function and a number of trainable parameters that grows linearly with feature dimensionality.
We show that GP-NAM achieves comparable or better performance in both classification and regression tasks with a large reduction in the number of parameters.
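The random Fourier feature construction behind GP-NAM can be illustrated in isolation: a fixed random single layer approximates an RBF-kernel GP, leaving only linear output weights to train, which is what makes the objective convex. The feature count and lengthscale below are illustrative assumptions, not values from the paper.

```python
# Sketch of random Fourier features (RFF): a frozen random layer
# cos(x W + b) whose inner products approximate an RBF kernel.
import numpy as np

rng = np.random.default_rng(1)
D = 256                # number of random features (assumed)
lengthscale = 1.0      # RBF lengthscale (assumed)

W = rng.normal(scale=1.0 / lengthscale, size=(1, D))  # frozen weights
b = rng.uniform(0.0, 2.0 * np.pi, size=D)             # frozen phases

def rff(x):
    """Map inputs of shape (n, 1) to random features of shape (n, D)."""
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

x = np.array([[0.0], [0.5]])
phi = rff(x)
# phi @ phi.T approximates the RBF kernel exp(-|x - x'|^2 / 2):
# here the off-diagonal entry should be close to exp(-0.125).
k_approx = phi @ phi.T
```

A GP-NAM-style model would apply one such map per input feature and fit only a linear readout on the concatenated features, so the parameter count grows linearly in the number of features.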
arXiv Detail & Related papers (2024-02-19T20:29:34Z)
- Autoregressive Conditional Neural Processes
Conditional neural processes (CNPs) are attractive meta-learning models.
They produce well-calibrated predictions and are trainable via a simple maximum likelihood procedure.
However, CNPs are unable to model dependencies in their predictions.
We propose to change how CNPs are deployed at test time, without any modifications to the model or training procedure.
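This test-time change can be sketched as autoregressive sampling: each target is predicted in turn and the sampled value is fed back into the context, inducing correlated joint samples from a model whose head only outputs independent Gaussian marginals. The `predict` function below is a hypothetical stand-in for any trained CNP, not the paper's model.

```python
# Sketch of autoregressive deployment of a CNP at test time.
import numpy as np

rng = np.random.default_rng(0)

def predict(x_ctx, y_ctx, x_tgt):
    """Placeholder CNP predictive: nearest-context mean, unit variance."""
    mean = np.array([y_ctx[np.argmin(np.abs(x_ctx - x))] for x in x_tgt])
    return mean, np.ones_like(mean)

def ar_sample(x_ctx, y_ctx, x_tgt, rng):
    x_ctx, y_ctx = list(x_ctx), list(y_ctx)
    sample = []
    for x in x_tgt:
        mean, var = predict(np.array(x_ctx), np.array(y_ctx),
                            np.array([x]))
        y = rng.normal(mean[0], np.sqrt(var[0]))
        x_ctx.append(x)   # feed the sampled value back as context,
        y_ctx.append(y)   # so later targets depend on earlier samples
        sample.append(y)
    return np.array(sample)

ys = ar_sample([0.0, 1.0], [0.0, 1.0], np.linspace(0.0, 1.0, 4), rng)
```

Because each draw conditions on the previous ones, repeated calls to `ar_sample` trace out correlated function samples even though the underlying model and training procedure are unchanged.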
arXiv Detail & Related papers (2023-03-25T13:34:12Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis
Canonical Correlation Analysis (CCA) is a method for feature extraction from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Practical Conditional Neural Processes Via Tractable Dependent Predictions
Conditional Neural Processes (CNPs) are meta-learning models which leverage the flexibility of deep learning to produce well-calibrated predictions.
CNPs do not produce correlated predictions, making them inappropriate for many estimation and decision making tasks.
We present a new class of Neural Process models that make correlated predictions and support exact maximum likelihood training.
arXiv Detail & Related papers (2022-03-16T17:37:41Z)
- Closed-form Continuous-Depth Models
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Large-scale Neural Solvers for Partial Differential Equations
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations: physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Bootstrapping Neural Processes
Neural Processes (NPs) implicitly define a broad class of processes with neural networks.
However, NPs still rely on the assumption that uncertainty in processes is modeled by a single latent variable.
We propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap.
arXiv Detail & Related papers (2020-08-07T02:23:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.