Substrate Scope Contrastive Learning: Repurposing Human Bias to Learn
Atomic Representations
- URL: http://arxiv.org/abs/2402.16882v1
- Date: Mon, 19 Feb 2024 02:21:20 GMT
- Title: Substrate Scope Contrastive Learning: Repurposing Human Bias to Learn
Atomic Representations
- Authors: Wenhao Gao, Priyanka Raghavan, Ron Shprints, Connor W. Coley
- Abstract summary: We introduce a novel pre-training strategy, substrate scope contrastive learning, which learns atomic representations tailored to chemical reactivity.
We focus on 20,798 aryl halides in the CAS Content Collection spanning thousands of publications to learn a representation of aryl halide reactivity.
This work not only presents a chemistry-tailored neural network pre-training strategy to learn reactivity-aligned atomic representations, but also marks a first-of-its-kind approach to benefit from the human bias in substrate scope design.
- Score: 14.528429119352328
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning molecular representation is a critical step in molecular machine
learning that significantly influences modeling success, particularly in
data-scarce situations. The concept of broadly pre-training neural networks has
advanced fields such as computer vision, natural language processing, and
protein engineering. However, similar approaches for small organic molecules
have not achieved comparable success. In this work, we introduce a novel
pre-training strategy, substrate scope contrastive learning, which learns
atomic representations tailored to chemical reactivity. This method considers
the grouping of substrates and their yields in published substrate scope tables
as a measure of their similarity or dissimilarity in terms of chemical
reactivity. We focus on 20,798 aryl halides in the CAS Content Collection
spanning thousands of publications to learn a representation of aryl halide
reactivity. We validate our pre-training approach through both intuitive
visualizations and comparisons to traditional reactivity descriptors and
physical organic chemistry principles. The versatility of these embeddings is
further evidenced in their application to yield prediction, regioselectivity
prediction, and the diverse selection of new substrates. This work not only
presents a chemistry-tailored neural network pre-training strategy to learn
reactivity-aligned atomic representations, but also marks a first-of-its-kind
approach to benefit from the human bias in substrate scope design.
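The abstract describes the core training signal: substrates that appear together in a published scope table (with comparable yields) are treated as similar in reactivity, and this grouping drives a contrastive objective over atomic embeddings. The paper does not give an implementation here, so the following is only a minimal NumPy sketch of an InfoNCE-style loss under that assumption; the function name, the use of scope-table membership as the sole positive-pair criterion, and the temperature value are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def contrastive_loss(embeddings, table_ids, temperature=0.1):
    """InfoNCE-style loss: atoms whose substrates share a scope table are
    treated as positives; all other atoms in the batch act as negatives.

    embeddings: (n, d) array of atomic embeddings.
    table_ids:  (n,) array; entry i identifies the scope table atom i came from.
    """
    # Cosine similarities, scaled by temperature.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature
    n = len(table_ids)
    losses = []
    for i in range(n):
        pos = (table_ids == table_ids[i]) & (np.arange(n) != i)
        if not pos.any():
            continue  # no positive partner for this atom in the batch
        logits = np.delete(sim[i], i)              # exclude self-similarity
        log_denom = np.log(np.exp(logits).sum())   # log-sum-exp over others
        losses.append(-(sim[i][pos] - log_denom).mean())
    return float(np.mean(losses))
```

In this sketch, embeddings of same-table atoms are pulled together and all others pushed apart; a real implementation would presumably weight pairs by yield similarity and backpropagate through a graph neural network encoder.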
Related papers
- Challenging reaction prediction models to generalize to novel chemistry [12.33727805025678]
We report a series of evaluations of a prototypical SMILES-based deep learning model.
First, we illustrate how performance on randomly sampled datasets is overly optimistic compared to performance when generalizing to new patents or new authors.
Second, we conduct time splits that evaluate how models perform when tested on reactions published in years after those in their training set, mimicking real-world deployment.
arXiv Detail & Related papers (2025-01-11T23:49:14Z)
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- A large dataset curation and benchmark for drug target interaction [0.7699646945563469]
Bioactivity data plays a key role in drug discovery and repurposing.
We propose a way to standardize and represent efficiently a very large dataset curated from multiple public sources.
arXiv Detail & Related papers (2024-01-30T17:06:25Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps, rather than the instantaneous input-output relationships assumed in earlier settings.
We present Diffusion-TracIn that incorporates this temporal dynamics and observe that samples' loss gradient norms are highly dependent on timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- MetaRF: Differentiable Random Forest for Reaction Yield Prediction with a Few Trials [58.47364143304643]
In this paper, we focus on the reaction yield prediction problem.
We first put forth MetaRF, an attention-based differentiable random forest model specially designed for the few-shot yield prediction.
To improve the few-shot learning performance, we further introduce a dimension-reduction based sampling method.
arXiv Detail & Related papers (2022-08-22T06:40:13Z)
- Selection of pseudo-annotated data for adverse drug reaction classification across drug groups [12.259552039796027]
We assess the robustness of state-of-the-art neural architectures across different drug groups.
We investigate several strategies to use pseudo-labeled data in addition to a manually annotated train set.
arXiv Detail & Related papers (2021-11-24T13:11:05Z)
- Machine learning modeling of family wide enzyme-substrate specificity screens [2.276367922551686]
Biocatalysis is a promising approach to synthesize pharmaceuticals, complex natural products, and commodity chemicals at scale.
The adoption of biocatalysis is limited by our ability to select enzymes that will catalyze their natural chemical transformation on non-natural substrates.
arXiv Detail & Related papers (2021-09-08T19:44:42Z)
- RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning [107.64562550844146]
Retrosynthesis is an emerging research area of deep learning.
We propose a new approach that reformulates retrosynthesis as a selection problem: choosing reactants from a candidate set of commercially available molecules.
For learning the score functions, we also propose a novel contrastive training scheme with hard negative mining.
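Hard negative mining, mentioned in the summary above, means training the contrastive score function against the most confusable wrong candidates rather than random ones. The snippet below is a minimal, hypothetical illustration of that selection step; the function name and array layout are assumptions, not RetCL's actual interface.

```python
import numpy as np

def mine_hard_negatives(scores, positive_idx, k=2):
    """Pick the k highest-scoring candidates that are NOT the true
    reactant; these 'hard' negatives contribute the most informative
    gradients to a contrastive loss.

    scores: (n_candidates,) similarity scores for one target product.
    """
    order = np.argsort(scores)[::-1]                  # best candidates first
    negatives = [i for i in order if i != positive_idx]
    return negatives[:k]
```

For example, if the true reactant scores 0.9 but two distractors score 0.8 and 0.2, those distractors (not the lowest-scoring ones) are kept as negatives.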
arXiv Detail & Related papers (2021-05-03T12:47:57Z)
- Chemical Property Prediction Under Experimental Biases [26.407895054724452]
This study focuses on mitigating bias in the experimental datasets.
We adopted two techniques from causal inference combined with graph neural networks that can represent molecular structures.
The experimental results in four possible bias scenarios indicated that the inverse propensity scoring-based method and the counter-factual regression-based method made solid improvements.
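Inverse propensity scoring, one of the two causal-inference techniques the summary credits, reweights each training example by the inverse of its estimated probability of having been measured, so that over-represented molecules do not dominate the loss. A minimal sketch under that assumption (the function name and the clipping constant are illustrative, not the paper's):

```python
import numpy as np

def ips_weighted_loss(losses, propensities, clip=10.0):
    """Reweight per-sample losses by inverse propensity scores.

    propensities: estimated probability that each molecule was selected
    for experiment; weights are clipped to bound estimator variance.
    """
    weights = np.minimum(1.0 / np.asarray(propensities), clip)
    return float(np.average(losses, weights=weights))
```

With uniform propensities this reduces to the ordinary mean loss; a rarely measured molecule (low propensity) receives a proportionally larger weight, up to the clip.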
arXiv Detail & Related papers (2020-09-18T08:40:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.