Uni-QSAR: an Auto-ML Tool for Molecular Property Prediction
- URL: http://arxiv.org/abs/2304.12239v1
- Date: Mon, 24 Apr 2023 16:29:08 GMT
- Title: Uni-QSAR: an Auto-ML Tool for Molecular Property Prediction
- Authors: Zhifeng Gao, Xiaohong Ji, Guojiang Zhao, Hongshuai Wang, Hang Zheng,
Guolin Ke, Linfeng Zhang
- Abstract summary: We propose Uni-QSAR, a powerful Auto-ML tool for molecular property prediction tasks.
Uni-QSAR combines molecular representation learning (MRL) of 1D sequential tokens, 2D topology graphs, and 3D conformers with pretraining models to leverage rich representation from large-scale unlabeled data.
- Score: 15.312021665242154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning (DL) based quantitative structure-activity
relationship (QSAR) models have shown performance surpassing traditional methods
on property prediction tasks in drug discovery. However, most DL-based QSAR
models are constrained by the limited labeled data available, and are also
sensitive to model scale and hyper-parameters. In this paper, we propose
Uni-QSAR, a powerful Auto-ML tool for molecular property prediction tasks.
Uni-QSAR combines molecular representation learning (MRL) of 1D sequential
tokens, 2D topology graphs, and 3D conformers with pretraining models to
leverage rich representations from large-scale unlabeled data. Without any
manual fine-tuning or model selection, Uni-QSAR outperforms the state of the
art (SOTA) on 21 of 22 tasks in the Therapeutic Data Commons (TDC) benchmark
under a designed parallel workflow, with an average performance improvement of
6.09%. Furthermore, we demonstrate the practical usefulness of Uni-QSAR in
drug discovery domains.
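For intuition, the sketch below shows what the three views combined by such an MRL pipeline can look like for a single molecule, using RDKit. This is an illustrative sketch, not the Uni-QSAR implementation; the tokenizer regex and feature choices are assumptions.

```python
# Minimal sketch of multi-modal molecular featurization: a 1D token view,
# a 2D topology view, and a 3D conformer view of the same molecule.
# Requires: pip install rdkit numpy
import re
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical SMILES tokenizer (bracket atoms, two-letter halogens, then
# single characters); real pretrained sequence models use their own vocab.
SMILES_TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|[BCNOSPFIbcnosp]|[=#$()+\-\\/@.%0-9]")

def featurize(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"invalid SMILES: {smiles}")

    # 1D: sequential tokens (input to a sequence encoder)
    tokens = SMILES_TOKEN.findall(smiles)

    # 2D: topology graph as an adjacency matrix plus simple atom features
    adjacency = Chem.GetAdjacencyMatrix(mol)
    atomic_nums = np.array([a.GetAtomicNum() for a in mol.GetAtoms()])

    # 3D: an embedded conformer giving per-atom coordinates (seeded for
    # reproducibility; embedding can fail for exotic molecules)
    mol3d = Chem.AddHs(mol)
    AllChem.EmbedMolecule(mol3d, randomSeed=42)
    coords = mol3d.GetConformer().GetPositions()

    return tokens, (adjacency, atomic_nums), coords

tokens, (adj, nums), coords = featurize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(len(tokens), adj.shape, coords.shape)
```

In an Auto-ML workflow of this kind, each view would feed a separately pretrained encoder (sequence, graph, and geometric), with the downstream per-view predictions ensembled automatically rather than hand-selected.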
Related papers
- On Machine Learning Approaches for Protein-Ligand Binding Affinity Prediction [2.874893537471256]
This study evaluates the performance of classical tree-based models and advanced neural networks in protein-ligand binding affinity prediction.
We show that combining 2D and 3D model strengths improves active learning outcomes beyond current state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-15T13:06:00Z) - Machine Learning Models for Accurately Predicting Properties of CsPbCl3 Perovskite Quantum Dots [0.0]
Perovskite Quantum Dots (PQDs) have a promising future for several applications due to their unique properties.
This study investigates the effectiveness of Machine Learning (ML) in predicting the size, absorbance (1S abs), and photoluminescence (PL) properties of $\mathrm{CsPbCl_3}$ PQDs.
arXiv Detail & Related papers (2024-06-20T19:08:54Z) - The Role of Model Architecture and Scale in Predicting Molecular Properties: Insights from Fine-Tuning RoBERTa, BART, and LLaMA [0.0]
This study introduces a systematic framework to compare the efficacy of Large Language Models (LLMs) for fine-tuning across various cheminformatics tasks.
We assessed three well-known models-RoBERTa, BART, and LLaMA-on their ability to predict molecular properties.
We found that LLaMA-based models generally offered the lowest validation loss, suggesting their superior adaptability across tasks and scales.
arXiv Detail & Related papers (2024-05-02T02:20:12Z) - Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z) - The Languini Kitchen: Enabling Language Modelling Research at Different
Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z) - Development and Evaluation of Conformal Prediction Methods for QSAR [0.5161531917413706]
The quantitative structure-activity relationship (QSAR) regression model is a commonly used technique for predicting biological activities of compounds.
Most machine learning (ML) algorithms that achieve superior predictive performance require add-on methods for estimating the uncertainty of their predictions.
Conformal prediction (CP) is a promising approach. It is agnostic to the prediction algorithm and can produce valid prediction intervals under some weak assumptions on the data distribution.
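For intuition, here is a minimal sketch of split (inductive) conformal regression wrapped around an arbitrary QSAR model; the model choice, synthetic descriptors, and coverage level are illustrative assumptions, not the paper's setup.

```python
# Split conformal regression: fit on a proper training set, calibrate
# residuals on a held-out set, and emit distribution-free intervals.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))  # stand-in molecular descriptors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.3, size=500)

# Split into a proper training set and a calibration set.
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.25,
                                            random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for 90% coverage, with finite-sample correction.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Valid prediction interval for a new compound, whatever the model is.
x_new = rng.normal(size=(1, 16))
pred = model.predict(x_new)[0]
print(f"90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

Note that the wrapped model is treated as a black box, which is exactly the algorithm-agnostic property the summary highlights.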
arXiv Detail & Related papers (2023-04-03T13:41:09Z) - Cross-Modal Fine-Tuning: Align then Refine [83.37294254884446]
ORCA is a cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities.
We show that ORCA obtains state-of-the-art results on 3 benchmarks containing over 60 datasets from 12 modalities.
arXiv Detail & Related papers (2023-02-11T16:32:28Z) - A multi-stage machine learning model on diagnosis of esophageal
manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z) - Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models.
arXiv Detail & Related papers (2021-06-01T22:33:53Z) - Improving AMR Parsing with Sequence-to-Sequence Pre-training [39.33133978535497]
In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing.
We propose a seq2seq pre-training approach to build pre-trained models in both single and joint ways.
Experiments show that both the single and joint pre-trained models significantly improve the performance.
arXiv Detail & Related papers (2020-10-05T04:32:47Z) - Ensemble Transfer Learning for the Prediction of Anti-Cancer Drug
Response [49.86828302591469]
In this paper, we apply transfer learning to the prediction of anti-cancer drug response.
We apply the classic transfer learning framework that trains a prediction model on the source dataset and refines it on the target dataset.
The ensemble transfer learning pipeline is implemented using LightGBM and two deep neural network (DNN) models with different architectures.
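For intuition, a minimal sketch of the train-on-source, refine-on-target pattern using LightGBM's `init_model` continuation; the synthetic arrays and parameters here are illustrative assumptions, not the paper's drug-response pipeline.

```python
# Classic transfer learning with gradient boosting: train a base model on
# the large source dataset, then continue boosting on the small target set.
# Requires: pip install lightgbm numpy
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X_src = rng.normal(size=(2000, 32))  # large source dataset (stand-in)
y_src = X_src[:, 0] + rng.normal(scale=0.5, size=2000)
X_tgt = rng.normal(size=(200, 32))   # small target dataset (stand-in)
y_tgt = X_tgt[:, 0] + 0.5 * X_tgt[:, 1] + rng.normal(scale=0.5, size=200)

params = {"objective": "regression", "learning_rate": 0.05, "verbose": -1}

# Stage 1: train a base model on the source task.
source_model = lgb.train(params, lgb.Dataset(X_src, y_src),
                         num_boost_round=200)

# Stage 2: refine on the target task by continuing boosting from the
# source model instead of starting from scratch.
refined = lgb.train(params, lgb.Dataset(X_tgt, y_tgt),
                    num_boost_round=50, init_model=source_model)

print(refined.predict(X_tgt[:3]))
```

An ensemble variant, as the summary describes, would repeat this refine step across multiple base learners (e.g. LightGBM plus DNNs) and aggregate their predictions.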
arXiv Detail & Related papers (2020-05-13T20:29:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.