Improved prediction of ligand-protein binding affinities by meta-modeling
- URL: http://arxiv.org/abs/2310.03946v5
- Date: Mon, 21 Oct 2024 15:22:54 GMT
- Title: Improved prediction of ligand-protein binding affinities by meta-modeling
- Authors: Ho-Joon Lee, Prashant S. Emani, Mark B. Gerstein
- Abstract summary: We develop a framework to integrate published force-field-based empirical docking and sequence-based deep learning models.
We show that many of our meta-models significantly improve affinity predictions over base models.
Our best meta-models achieve comparable performance to state-of-the-art deep learning tools exclusively based on 3D structures.
- Score: 1.3859669037499769
- License:
- Abstract: The accurate screening of candidate drug ligands against target proteins through computational approaches is of prime interest to drug development efforts. Such virtual screening depends in part on methods to predict the binding affinity between ligands and proteins. Many computational models for binding affinity prediction have been developed, but with varying results across targets. Given that ensembling or meta-modeling approaches have shown great promise in reducing model-specific biases, we develop a framework to integrate published force-field-based empirical docking and sequence-based deep learning models. In building this framework, we evaluate many combinations of individual base models, training databases, and several meta-modeling approaches. We show that many of our meta-models significantly improve affinity predictions over base models. Our best meta-models achieve comparable performance to state-of-the-art deep learning tools exclusively based on 3D structures, while allowing for improved database scalability and flexibility through the explicit inclusion of features such as physicochemical properties or molecular descriptors. We further demonstrate improved generalization capability by our models using a large-scale benchmark of affinity prediction as well as a virtual screening application benchmark. Overall, we demonstrate that diverse modeling approaches can be ensembled together to gain meaningful improvement in binding affinity prediction.
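As a rough illustration of the meta-modeling idea described in the abstract, the sketch below stacks hypothetical base-model outputs (a force-field docking score and a sequence-based deep learning prediction) together with a few physicochemical descriptors and fits a gradient-boosted regressor as the meta-model. The synthetic data, feature names, and choice of regressor are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal stacking meta-model sketch (illustrative; not the authors' exact pipeline).
# Assumes each ligand-protein pair already has base-model predictions plus a few
# simple ligand descriptors; all values below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

docking_score = rng.normal(-7.0, 1.5, n)   # force-field docking score (kcal/mol)
dl_prediction = rng.normal(6.5, 1.0, n)    # sequence-based model prediction (pKd)
mol_weight    = rng.normal(350.0, 60.0, n) # molecular weight descriptor
logp          = rng.normal(2.5, 1.0, n)    # lipophilicity descriptor

# Meta-features: base-model outputs concatenated with physicochemical descriptors.
X = np.column_stack([docking_score, dl_prediction, mol_weight, logp])
# Synthetic "measured" affinity loosely tied to the base predictions.
y = 0.5 * dl_prediction - 0.3 * docking_score + rng.normal(0.0, 0.5, n)

meta_model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
scores = cross_val_score(meta_model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In a real stacking setup, the base-model predictions used as meta-features would be generated on held-out folds so that training labels do not leak into the meta-model.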
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization.
arXiv Detail & Related papers (2024-10-06T15:25:39Z) - PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis [14.526536510805755]
We present a comprehensive framework for predicting the effects of perturbations in single cells, designed to standardize benchmarking in this rapidly evolving field.
Our framework, PerturBench, includes a user-friendly platform, diverse datasets, metrics for fair model comparison, and detailed performance analysis.
arXiv Detail & Related papers (2024-08-20T07:40:20Z) - Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z) - Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach [25.927323251675386]
We leverage the meta-features associated with each entity as a source of worldly knowledge and employ entity representations from the models.
We propose using the consistency between these representations and the meta-features as a metric for evaluating pre-trained models.
Our method's effectiveness is demonstrated across various domains, including models with relational datasets, large language models and image models.
arXiv Detail & Related papers (2024-01-02T17:08:26Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery has been proposed to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - The Best of Both Worlds: Combining Model-based and Nonparametric Approaches for 3D Human Body Estimation [20.797162096899154]
We propose a framework for estimating model parameters from global image features.
A dense map prediction module explicitly establishes the dense UV correspondence between the image evidence and each part of the body model.
An inverse kinematics module refines the keypoint prediction and generates a posed template mesh.
A UV inpainting module relies on the corresponding feature, prediction and the posed template, and completes the predictions of occluded body shape.
arXiv Detail & Related papers (2022-05-01T16:39:09Z) - Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - An Extended Multi-Model Regression Approach for Compressive Strength Prediction and Optimization of a Concrete Mixture [0.0]
A model-based evaluation of concrete compressive strength is of high value, both for strength prediction and for mixture optimization.
We take a further step towards improving the accuracy of the prediction model via the weighted combination of multiple regression methods.
A genetic algorithm (GA)-based mixture optimization is proposed, building on the obtained multi-regression model.
arXiv Detail & Related papers (2021-06-13T16:10:32Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z) - Improved Protein-ligand Binding Affinity Prediction with Structure-Based Deep Fusion Inference [3.761791311908692]
Predicting accurate protein-ligand binding affinity is important in drug discovery.
Despite recent advances in deep convolutional and graph neural network based approaches, model performance depends on the input data representation.
We present fusion models that combine the feature representations of two neural network models to improve binding affinity prediction (a minimal late-fusion sketch appears after this entry).
arXiv Detail & Related papers (2020-05-17T22:26:27Z)
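As a loose illustration of the late-fusion idea in the last entry above, the sketch below concatenates embeddings from two hypothetical feature branches (for example, a 3D-grid encoder and a graph encoder) and passes them to a shared regression head. The branch dimensions, layer sizes, and use of PyTorch are illustrative assumptions rather than the paper's architecture.

```python
# Minimal late-fusion sketch (illustrative assumption, not the paper's exact model):
# embeddings from two upstream encoders are concatenated and regressed to an affinity.
import torch
import torch.nn as nn

class LateFusionRegressor(nn.Module):
    def __init__(self, grid_dim=128, graph_dim=64, hidden=96):
        super().__init__()
        # Each branch is assumed to be a precomputed embedding here; in practice
        # they would come from upstream 3D-CNN and graph-network encoders.
        self.fusion_head = nn.Sequential(
            nn.Linear(grid_dim + graph_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted binding affinity (e.g., pKd)
        )

    def forward(self, grid_embedding, graph_embedding):
        fused = torch.cat([grid_embedding, graph_embedding], dim=-1)
        return self.fusion_head(fused).squeeze(-1)

# Toy usage with random tensors standing in for real encoder outputs.
model = LateFusionRegressor()
grid_emb = torch.randn(8, 128)
graph_emb = torch.randn(8, 64)
print(model(grid_emb, graph_emb).shape)  # torch.Size([8])
```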
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.