Deep Surrogate Docking: Accelerating Automated Drug Discovery with Graph
Neural Networks
- URL: http://arxiv.org/abs/2211.02720v1
- Date: Fri, 4 Nov 2022 19:36:02 GMT
- Title: Deep Surrogate Docking: Accelerating Automated Drug Discovery with Graph
Neural Networks
- Authors: Ryien Hosseini, Filippo Simini, Austin Clyde, Arvind Ramanathan
- Abstract summary: We introduce Deep Surrogate Docking (DSD), a framework that applies deep learning-based surrogate modeling to accelerate the docking process substantially.
We show that the DSD workflow combined with the FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3% recall error rate.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The process of screening molecules for desirable properties is a key step in
several applications, ranging from drug discovery to material design. During
the process of drug discovery specifically, protein-ligand docking, or chemical
docking, is a standard in-silico scoring technique that estimates the binding
affinity of molecules with a specific protein target. Recently, however, as the
number of virtual molecules available to test has rapidly grown, these
classical docking algorithms have created a significant computational
bottleneck. We address this problem by introducing Deep Surrogate Docking
(DSD), a framework that applies deep learning-based surrogate modeling to
accelerate the docking process substantially. DSD can be interpreted as a
formalism of several earlier surrogate prefiltering techniques, adding novel
metrics and practical training practices. Specifically, we show that graph
neural networks (GNNs) can serve as fast and accurate estimators of classical
docking algorithms. Additionally, we introduce FiLMv2, a novel GNN architecture
which we show outperforms existing state-of-the-art GNN architectures,
attaining more accurate and stable performance by allowing the model to filter
out irrelevant information from data more efficiently. Through extensive
experimentation and analysis, we show that the DSD workflow combined with the
FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3%
recall error rate on an example docking task. Our open-source code is available
at https://github.com/ryienh/graph-dock.
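The surrogate prefiltering workflow the abstract describes can be sketched in a few lines: score the whole library with a cheap learned surrogate, send only the top-ranked fraction to the expensive classical docking scorer, and measure recall error as the fraction of true top hits the filter missed. This is a minimal illustrative sketch, not the authors' implementation; `dock` and `surrogate_score` are mocked stand-ins (a deterministic pseudo-score plus Gaussian noise) rather than a real docking engine or a trained GNN.

```python
import random

random.seed(0)

def dock(mol):
    # Mock classical docking scorer: expensive in practice, here a
    # deterministic pseudo-score in [0, 1) where lower = better binding.
    i = int(mol.split("_")[1])
    return (i * 37 % 1000) / 1000.0

def surrogate_score(mol):
    # Mock GNN surrogate: cheap approximation of dock() with noise.
    return dock(mol) + random.gauss(0.0, 0.05)

library = [f"mol_{i}" for i in range(10_000)]

# 1. Rank the entire library with the cheap surrogate.
ranked = sorted(library, key=surrogate_score)

# 2. Dock only the top fraction (here 10%) with the expensive scorer.
keep = ranked[: len(library) // 10]
exact = {m: dock(m) for m in keep}

# 3. Recall error: fraction of the true top 1% the prefilter missed.
true_top = set(sorted(library, key=dock)[: len(library) // 100])
missed = len(true_top - set(keep)) / len(true_top)
print(f"recall error: {missed:.2%}")
```

The speedup comes from step 2: the expensive scorer runs on 10% of the library instead of 100%, while the recall error quantifies how many genuinely good binders the surrogate filter discards.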
Related papers
- GNNAS-Dock: Budget Aware Algorithm Selection with Graph Neural Networks for Molecular Docking [0.0]
This paper introduces GNNAS-Dock, a novel Graph Neural Network (GNN)-based automated algorithm selection system for blind molecular docking.
GNNs are employed to process the complex structural data of both ligands and proteins.
They benefit from inherent graph-like properties to predict the performance of various docking algorithms under different conditions.
arXiv Detail & Related papers (2024-11-19T16:01:54Z) - Dockformer: A transformer-based molecular docking paradigm for large-scale virtual screening [29.886873241333433]
As compound libraries grow in size, so does the computational complexity of traditional docking models.
Deep learning algorithms can provide data-driven research and development models to increase the speed of the docking process.
A novel deep learning-based docking approach named Dockformer is introduced in this study.
arXiv Detail & Related papers (2024-11-11T06:25:13Z) - Unveiling the Unseen: Identifiable Clusters in Trained Depthwise
Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z) - Quantum-Inspired Machine Learning for Molecular Docking [9.16729372551085]
Molecular docking is an important tool for structure-based drug design, accelerating the efficiency of drug development.
Traditional docking, which searches over possible binding sites and conformations, is computationally complex and performs poorly under blind docking.
We introduce quantum-inspired algorithms that combine quantum properties with spatial optimization.
Our method outperforms traditional docking algorithms and deep learning-based algorithms by over 10%.
arXiv Detail & Related papers (2024-01-22T09:16:41Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Rapid training of deep neural networks without skip connections or
normalization layers using Deep Kernel Shaping [46.083745557823164]
We identify the main pathologies present in deep networks that prevent them from training fast and generalizing to unseen data.
We show how these can be avoided by carefully controlling the "shape" of the network's kernel function.
arXiv Detail & Related papers (2021-10-05T00:49:36Z) - ParaVS: A Simple, Fast, Efficient and Flexible Graph Neural Network
Framework for Structure-Based Virtual Screening [2.5137859989323537]
We introduce a docking-based SBVS method and a deep learning non-docking-based method that avoids the computational cost of the docking process.
The inference speed of ParaVS-ND is about 3.6e5 molecules per core-hour, versus around 20 for a conventional docking-based method, making it roughly 16000 times faster.
arXiv Detail & Related papers (2021-02-08T08:24:05Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Temporal Attention-Augmented Graph Convolutional Network for Efficient
Skeleton-Based Human Action Recognition [97.14064057840089]
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures.
Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action.
We propose a temporal attention module (TAM) for increasing the efficiency in skeleton-based action recognition.
arXiv Detail & Related papers (2020-10-23T08:01:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.