Directed Weight Neural Networks for Protein Structure Representation
Learning
- URL: http://arxiv.org/abs/2201.13299v1
- Date: Fri, 28 Jan 2022 13:41:56 GMT
- Title: Directed Weight Neural Networks for Protein Structure Representation
Learning
- Authors: Jiahan Li, Shitong Luo, Congyue Deng, Chaoran Cheng, Jiaqi Guan,
Leonidas Guibas, Jian Peng, Jianzhu Ma
- Abstract summary: We propose the Directed Weight Neural Network for better capturing geometric relations among different amino acids.
Our new framework supports a rich set of geometric operations on both classical and SO(3)-representation features.
It achieves state-of-the-art performance on various computational biology applications related to protein 3D structures.
- Score: 16.234990522729348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A protein performs biological functions by folding to a particular 3D
structure. To accurately model the protein structures, both the overall
geometric topology and local fine-grained relations between amino acids (e.g.
side-chain torsion angles and inter-amino-acid orientations) should be
carefully considered. In this work, we propose the Directed Weight Neural
Network for better capturing geometric relations among different amino acids.
Extending a single weight from a scalar to a 3D directed vector, our new
framework supports a rich set of geometric operations on both classical and
SO(3)-representation features, on top of which we construct a perceptron unit
for processing amino-acid information. In addition, we introduce an equivariant
message passing paradigm on proteins for plugging the directed weight
perceptrons into existing Graph Neural Networks, showing superior versatility
in maintaining SO(3)-equivariance at the global scale. Experiments show that
our network has remarkably better expressiveness in representing geometric
relations in comparison to classical neural networks and the (globally)
equivariant networks. It also achieves state-of-the-art performance on various
computational biology applications related to protein 3D structures.
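As a minimal illustration of the kind of SO(3)-equivariant operation such frameworks build on (a toy sketch, not the paper's actual directed-weight layer; shapes and names are illustrative), a linear map that mixes vector-feature channels commutes with any global rotation:

```python
import numpy as np

def mix_vector_channels(V, W):
    """Linearly mix C vector channels (shape C x 3) with weights W (C' x C).
    Mixing acts only on the channel axis, so it commutes with any rotation
    applied to the 3D axis: the map is SO(3)-equivariant."""
    return W @ V

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))   # 4 vector features in R^3
W = rng.normal(size=(8, 4))   # channel-mixing weights (hypothetical sizes)

# sample a random rotation in SO(3) via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]        # flip one axis to land in SO(3)

# equivariance check: rotate-then-mix equals mix-then-rotate
assert np.allclose(mix_vector_channels(V @ Q.T, W),
                   mix_vector_channels(V, W) @ Q.T)
```

Classical scalar weights have no such structure, which is why vector-valued or directed weights are needed to process orientation features without breaking equivariance.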
Related papers
- A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems [87.30652640973317]
Recent advances in computational modelling of atomic systems represent them as geometric graphs with atoms embedded as nodes in 3D Euclidean space.
Geometric Graph Neural Networks have emerged as the preferred machine learning architecture powering applications ranging from protein structure prediction to molecular simulations and material generation.
This paper provides a comprehensive and self-contained overview of the field of Geometric GNNs for 3D atomic systems.
arXiv Detail & Related papers (2023-12-12T18:44:19Z)
- EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction [49.674494450107005]
Predicting the binding sites of target proteins plays a fundamental role in drug discovery.
Most existing deep-learning methods consider a protein as a 3D image by spatially clustering its atoms into voxels.
This work proposes EquiPocket, an E(3)-equivariant Graph Neural Network (GNN) for binding site prediction.
arXiv Detail & Related papers (2023-02-23T17:18:26Z)
- Integration of Pre-trained Protein Language Models into Geometric Deep Learning Networks [68.90692290665648]
We integrate knowledge learned by protein language models into several state-of-the-art geometric networks.
Our findings show an overall improvement of 20% over baselines.
Strong evidence indicates that the incorporation of protein language models' knowledge enhances geometric networks' capacity by a significant margin.
arXiv Detail & Related papers (2022-12-07T04:04:04Z)
- Learning Geometrically Disentangled Representations of Protein Folding Simulations [72.03095377508856]
This work focuses on learning a generative neural network on a structural ensemble of a drug-target protein.
Model tasks involve characterizing the distinct structural fluctuations of the protein bound to various drug molecules.
Results show that our geometric learning-based method enjoys both accuracy and efficiency for generating complex structural variations.
arXiv Detail & Related papers (2022-05-20T19:38:00Z)
- Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking [57.2037357017652]
We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures.
We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right docked position.
Our model, named EquiDock, approximates the binding pockets and predicts the docking poses using keypoint matching and alignment.
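The rigid alignment step behind keypoint-based docking is classically solved with the Kabsch algorithm; a plain NumPy sketch (not EquiDock's differentiable implementation) that recovers the optimal rotation and translation between matched point sets:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rigid transform (R, t) minimizing ||P @ R.T + t - Q||_F
    for matched point sets P, Q of shape (n, 3)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# recover a known rigid motion from 10 matched points
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] = -R_true[:, 0]
t_true = rng.normal(size=3)
Q = P @ R_true.T + t_true

R, t = kabsch(P, Q)
assert np.allclose(P @ R.T + t, Q)            # exact in the noiseless case
```

In the noisy setting the same formula gives the least-squares-optimal superposition, which is why variants of it appear throughout structure prediction and docking pipelines.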
arXiv Detail & Related papers (2021-11-15T18:46:37Z)
- G-VAE, a Geometric Convolutional VAE for Protein Structure Generation [41.66010308405784]
We introduce a joint geometric and neural-network approach for comparing, deforming, and generating 3D protein structures.
Our method is able to generate plausible structures, different from the structures in the training data.
arXiv Detail & Related papers (2021-06-22T16:52:48Z)
- Spherical convolutions on molecular graphs for protein model quality assessment [0.0]
In this work, we propose Spherical Graph Convolutional Network (S-GCN) that processes 3D models of proteins represented as molecular graphs.
Within the framework of the protein model quality assessment problem, we demonstrate that the proposed spherical convolution method significantly improves the quality of model assessment.
arXiv Detail & Related papers (2020-11-16T14:22:36Z)
- Learning from Protein Structure with Geometric Vector Perceptrons [6.5360079597553025]
We introduce geometric vector perceptrons, which extend standard dense layers to operate on collections of Euclidean vectors.
We demonstrate our approach on two important problems in learning from protein structure: model quality assessment and computational protein design.
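The core idea (scalar channels read only rotation-invariant vector norms, while vector channels are mixed linearly) can be sketched in a simplified toy layer; this is not the authors' implementation, and all parameter shapes are illustrative:

```python
import numpy as np

def gvp_like(s, V, Wh, Wm, Ws):
    """Toy geometric-vector-perceptron-style layer.
    s: (ns,) scalar features; V: (nv, 3) vector features.
    Scalars see only vector norms (rotation-invariant); vectors are
    mixed linearly over channels (rotation-equivariant)."""
    Vh = Wh @ V                                   # (h, 3) hidden vectors
    norms = np.linalg.norm(Vh, axis=1)            # (h,) invariant norms
    s_out = np.maximum(Ws @ np.concatenate([s, norms]), 0.0)  # ReLU scalars
    V_out = Wm @ Vh                               # (mv, 3) output vectors
    return s_out, V_out

rng = np.random.default_rng(2)
s = rng.normal(size=5)
V = rng.normal(size=(4, 3))
Wh = rng.normal(size=(6, 4))
Wm = rng.normal(size=(3, 6))
Ws = rng.normal(size=(7, 5 + 6))

# rotating the input leaves scalars unchanged and rotates vectors along
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
s1, V1 = gvp_like(s, V, Wh, Wm, Ws)
s2, V2 = gvp_like(s, V @ Q.T, Wh, Wm, Ws)
assert np.allclose(s1, s2) and np.allclose(V1 @ Q.T, V2)
```

The norm-based coupling is what lets scalar nonlinearities influence the network without ever breaking the equivariance of the vector channels.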
arXiv Detail & Related papers (2020-09-03T01:54:25Z)
- BERTology Meets Biology: Interpreting Attention in Protein Language Models [124.8966298974842]
We demonstrate methods for analyzing protein Transformer models through the lens of attention.
We show that attention captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure.
We also present a three-dimensional visualization of the interaction between attention and protein structure.
arXiv Detail & Related papers (2020-06-26T21:50:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.