FABind: Fast and Accurate Protein-Ligand Binding
- URL: http://arxiv.org/abs/2310.06763v5
- Date: Tue, 9 Jan 2024 03:08:47 GMT
- Title: FABind: Fast and Accurate Protein-Ligand Binding
- Authors: Qizhi Pei, Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Yingce Xia, Shufang Xie,
Tao Qin, Kun He, Tie-Yan Liu, Rui Yan
- Abstract summary: $\mathbf{FABind}$ is an end-to-end model that combines pocket prediction and docking to achieve accurate and fast protein-ligand binding.
Our proposed model demonstrates strong advantages in terms of effectiveness and efficiency compared to existing methods.
- Score: 127.7790493202716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the interaction between proteins and ligands and accurately
predicting their binding structures is a critical yet challenging task in drug
discovery. Recent advancements in deep learning have shown promise in
addressing this challenge, with sampling-based and regression-based methods
emerging as two prominent approaches. However, these methods have notable
limitations. Sampling-based methods often suffer from low efficiency due to the
need for generating multiple candidate structures for selection. On the other
hand, regression-based methods offer fast predictions but may experience
decreased accuracy. Additionally, the variation in protein sizes often requires
external modules for selecting suitable binding pockets, further impacting
efficiency. In this work, we propose $\mathbf{FABind}$, an end-to-end model
that combines pocket prediction and docking to achieve accurate and fast
protein-ligand binding. $\mathbf{FABind}$ incorporates a unique ligand-informed
pocket prediction module, which is also leveraged for docking pose estimation.
The model further enhances the docking process by incrementally integrating the
predicted pocket to optimize protein-ligand binding, reducing discrepancies
between training and inference. Through extensive experiments on benchmark
datasets, our proposed $\mathbf{FABind}$ demonstrates strong advantages in
terms of effectiveness and efficiency compared to existing methods. Our code is
available at https://github.com/QizhiPei/FABind
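The pocket-then-dock pipeline described in the abstract can be sketched in miniature. Everything below is an illustrative stand-in, not FABind's actual architecture: the per-residue scores would come from the learned ligand-informed pocket prediction module, and the docking step would be a learned equivariant coordinate update rather than a rigid translation.

```python
import numpy as np

def predict_pocket_center(residue_coords, residue_scores):
    """Ligand-informed pocket prediction, reduced to its core idea:
    score each residue and take the score-weighted centroid as the
    pocket center (softmax over hypothetical per-residue logits)."""
    weights = np.exp(residue_scores - residue_scores.max())
    weights /= weights.sum()
    return weights @ residue_coords  # shape (3,)

def crop_pocket(residue_coords, center, radius=10.0):
    """Keep only residues within `radius` angstroms of the predicted
    center, so docking operates on a fixed-size local region."""
    dists = np.linalg.norm(residue_coords - center, axis=1)
    return residue_coords[dists <= radius]

def dock_ligand(ligand_coords, center):
    """Toy 'regression docking': translate the ligand so its centroid
    sits at the predicted pocket center (one deterministic step)."""
    return ligand_coords - ligand_coords.mean(axis=0) + center

rng = np.random.default_rng(0)
protein = rng.normal(scale=20.0, size=(50, 3))  # 50 residue coordinates
scores = rng.normal(size=50)                    # hypothetical pocket logits
ligand = rng.normal(size=(12, 3))               # 12 ligand atom coordinates

center = predict_pocket_center(protein, scores)
pocket = crop_pocket(protein, center, radius=15.0)
pose = dock_ligand(ligand, center)
```

Chaining the stages in one forward pass is what makes the approach end-to-end; in the real model the predicted pocket is incrementally fed back to refine the binding pose.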
Related papers
- The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation [48.52206677611072]
Speculative decoding aims to speed up autoregressive generation of a language model by verifying in parallel the tokens generated by a smaller draft model.
We show that combinations of simple strategies can achieve significant inference speedups over different tasks.
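The draft-then-verify loop behind speculative decoding can be sketched for the greedy case. The `target_next`/`draft_next` callables below are hypothetical stand-ins for real model calls; a learning-free drafter such as an n-gram lookup would slot in as `draft_next`.

```python
def speculative_generate(target_next, draft_next, prompt, n_new, k=4):
    """Greedy speculative decoding sketch: a cheap draft proposes up to
    k tokens, the target verifies them, and the longest agreeing prefix
    is accepted. Output always matches plain greedy decoding."""
    seq = list(prompt)
    while len(seq) < len(prompt) + n_new:
        draft, ctx = [], list(seq)
        for _ in range(k):               # draft k tokens cheaply
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        accepted, ctx = 0, list(seq)
        for t in draft:                  # verify against the target
            if target_next(ctx) == t:
                accepted += 1
                ctx.append(t)
            else:
                break
        seq = seq + draft[:accepted]
        seq.append(target_next(seq))     # target always emits one token
    return seq[:len(prompt) + n_new]
```

With a perfect drafter every proposal is accepted and each loop iteration emits k+1 tokens; with a useless drafter the loop degrades gracefully to one token per iteration, which is why the speedup is essentially free of accuracy risk.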
arXiv Detail & Related papers (2024-11-06T09:23:50Z)
- Exploiting Pre-trained Models for Drug Target Affinity Prediction with Nearest Neighbors [58.661454334877256]
Drug-Target binding Affinity (DTA) prediction is essential for drug discovery.
Despite the application of deep learning methods to DTA prediction, the achieved accuracy remains suboptimal.
We propose $k$NN-DTA, an embedding-based retrieval method built on a pre-trained DTA prediction model.
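The retrieval idea can be sketched in a few lines. This is a generic retrieval-augmented regression sketch in the spirit of $k$NN-DTA, not the paper's exact formulation; the blend weight `lam` and the temperature are hypothetical knobs.

```python
import numpy as np

def knn_augmented_prediction(query_emb, model_pred, bank_embs, bank_labels,
                             k=3, lam=0.5, temp=1.0):
    """Blend a pre-trained model's affinity prediction with a
    softmax-weighted average of the labels of the k nearest neighbors
    in embedding space (no extra training required)."""
    dists = np.linalg.norm(bank_embs - query_emb, axis=1)
    idx = np.argsort(dists)[:k]          # k nearest training examples
    w = np.exp(-dists[idx] / temp)       # closer neighbors weigh more
    w /= w.sum()
    knn_pred = float(w @ bank_labels[idx])
    return lam * model_pred + (1 - lam) * knn_pred
```

Because the neighbor bank is built from precomputed embeddings, the augmentation is training-free and can be bolted onto any frozen predictor.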
arXiv Detail & Related papers (2024-07-21T15:49:05Z)
- PLA-SGCN: Protein-Ligand Binding Affinity Prediction by Integrating Similar Pairs and Semi-supervised Graph Convolutional Network [6.024776891570197]
This paper integrates retrieved hard protein-ligand pairs into PLA prediction (i.e., the task prediction step) using a semi-supervised graph convolutional network (GCN).
The results show that the proposed method performs significantly better than comparable approaches.
arXiv Detail & Related papers (2024-05-13T03:27:02Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Smoothly Giving up: Robustness for Simple Models [30.56684535186692]
Examples of algorithms to train such models include logistic regression and boosting.
We use a tunable family of jointly convex loss functions, which interpolates between canonical convex losses, to robustly train such models.
We also provide results for boosting and logistic regression on a COVID-19 dataset, highlighting the efficacy of the approach across multiple relevant domains.
arXiv Detail & Related papers (2023-02-17T19:48:11Z)
- Surprises in adversarially-trained linear regression [12.33259114006129]
Adversarial training is one of the most effective approaches to defend against adversarial examples.
We show that for linear regression problems, adversarial training can be formulated as a convex problem.
We show that for sufficiently many features or sufficiently small regularization parameters, the learned model perfectly interpolates the training data.
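The convex formulation has a concrete core worth making explicit: for an $\ell_\infty$-bounded input perturbation, the inner maximization of the squared loss has the known closed form $(|y - w^\top x| + \epsilon \|w\|_1)^2$, which is convex in $w$. The check below illustrates that setting only, not the paper's interpolation results; the example numbers are arbitrary.

```python
from itertools import product

import numpy as np

def adversarial_sq_loss(w, x, y, eps):
    """Closed-form worst-case squared loss for linear regression under
    ||delta||_inf <= eps: (|y - w.x| + eps * ||w||_1)^2."""
    return (abs(y - w @ x) + eps * np.abs(w).sum()) ** 2

def brute_force_max(w, x, y, eps):
    """The squared loss is convex in delta, so its maximum over the
    l_inf ball is attained at a cube vertex; enumerate all 2^d corners."""
    return max((y - (x + np.array(d)) @ w) ** 2
               for d in product([-eps, eps], repeat=x.size))

w = np.array([1.0, -2.0, 0.5])
x = np.array([1.0, 1.0, 1.0])
closed = adversarial_sq_loss(w, x, 0.3, 0.1)
brute = brute_force_max(w, x, 0.3, 0.1)
```

Since the worst-case loss is an explicit convex function of $w$, the full adversarial training objective can be minimized with standard convex solvers, which is what makes the exact analysis in the paper tractable.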
arXiv Detail & Related papers (2022-05-25T11:54:42Z)
- EquiBind: Geometric Deep Learning for Drug Binding Structure Prediction [31.191844909335963]
Predicting how a drug-like molecule binds to a specific protein target is a core problem in drug discovery.
An extremely fast computational binding method would enable key applications such as fast virtual screening or drug engineering.
We present EquiBind, an SE(3)-equivariant geometric deep learning model performing direct-shot prediction of both i) the receptor binding location (blind docking) and ii) the ligand's bound pose and orientation.
arXiv Detail & Related papers (2022-02-07T16:26:05Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
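The method itself is a one-line change to inference, sketched below for a single normalization layer. This is an illustrative NumPy sketch, not the paper's implementation; in a real network the same switch would be applied to every BN layer.

```python
import numpy as np

def batchnorm(x, mean, var, gamma, beta, eps=1e-5):
    """Standard batch-norm transform with given statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def prediction_time_bn(x_test, running_mean, running_var, gamma, beta,
                       use_test_stats=True):
    """Prediction-time BN: under covariate shift, normalize the test
    batch with its own statistics instead of the training-time running
    averages, absorbing the shift in the normalization step."""
    if use_test_stats:
        mean, var = x_test.mean(axis=0), x_test.var(axis=0)
    else:
        mean, var = running_mean, running_var
    return batchnorm(x_test, mean, var, gamma, beta)

rng = np.random.default_rng(1)
x_test = rng.normal(5.0, 2.0, size=(128, 4))  # shifted test batch
out = prediction_time_bn(x_test, np.zeros(4), np.ones(4), 1.0, 0.0)
out_train = prediction_time_bn(x_test, np.zeros(4), np.ones(4), 1.0, 0.0,
                               use_test_stats=False)
```

Normalizing with test-batch statistics re-centers the shifted inputs, whereas the stale running averages pass the shift straight through to the downstream layers.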
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Improved Protein-ligand Binding Affinity Prediction with Structure-Based Deep Fusion Inference [3.761791311908692]
Predicting accurate protein-ligand binding affinity is important in drug discovery.
Despite recent advances in deep convolutional and graph neural network based approaches, model performance depends on the input data representation.
We present fusion models that combine the feature representations of two neural network models to improve binding affinity prediction.
arXiv Detail & Related papers (2020-05-17T22:26:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.