Learning the shape of protein micro-environments with a holographic
convolutional neural network
- URL: http://arxiv.org/abs/2211.02936v1
- Date: Sat, 5 Nov 2022 16:29:15 GMT
- Authors: Michael N. Pun, Andrew Ivanov, Quinn Bellamy, Zachary Montague, Colin
LaMont, Philip Bradley, Jakub Otwinowski, Armita Nourmohammad
- Abstract summary: We introduce Holographic Convolutional Neural Network (H-CNN) for proteins.
H-CNN is a physically motivated machine learning approach to model amino acid preferences in protein structures.
It accurately predicts the impact of mutations on protein function, including stability and binding of protein complexes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proteins play a central role in biology from immune recognition to brain
activity. While major advances in machine learning have improved our ability to
predict protein structure from sequence, determining protein function from
structure remains a major challenge. Here, we introduce Holographic
Convolutional Neural Network (H-CNN) for proteins, which is a physically
motivated machine learning approach to model amino acid preferences in protein
structures. H-CNN reflects physical interactions in a protein structure and
recapitulates the functional information stored in evolutionary data. H-CNN
accurately predicts the impact of mutations on protein function, including
stability and binding of protein complexes. Our interpretable computational
model for protein structure-function maps could guide design of novel proteins
with desired function.
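The paper does not include code in this listing, but the "holographic" idea can be illustrated loosely: project the atoms of a local micro-environment onto spherical harmonics, so that rotating the neighborhood only mixes coefficients within each degree l. The sketch below is an assumption-laden illustration, not the authors' method: the function names, the exp(-r) radial weighting, and the choice of treating each atom as a unit point mass are all hypothetical simplifications.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv  # associated Legendre function P_l^m(x)

def sph_harm_lm(l, m, theta, phi):
    """Spherical harmonic Y_l^m at azimuth theta in [0, 2pi), polar phi in [0, pi]."""
    m_abs = abs(m)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - m_abs) / factorial(l + m_abs))
    y = norm * lpmv(m_abs, l, np.cos(phi)) * np.exp(1j * m_abs * theta)
    if m < 0:
        # Y_{l,-m} = (-1)^m * conj(Y_{l,m})
        y = (-1) ** m_abs * np.conj(y)
    return y

def holographic_encoding(points, l_max=4):
    """Encode a point cloud (atomic neighborhood centered at the origin)
    as summed spherical-harmonic coefficients c_{lm} up to degree l_max.

    A rotation of the input mixes coefficients only within each degree l,
    which is what makes such features convenient for rotation-equivariant
    networks. The exp(-r) radial weight is a hypothetical choice so that
    nearby atoms dominate the encoding.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(y, x) % (2 * np.pi)                          # azimuth
    phi = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))   # polar
    coeffs = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            coeffs.append(np.sum(np.exp(-r) * sph_harm_lm(l, m, theta, phi)))
    return np.array(coeffs)
```

One useful sanity check of this representation: rotating the neighborhood about the z-axis multiplies each c_{lm} by a phase e^{im*alpha}, so the coefficient magnitudes are unchanged.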
Related papers
- Computational Protein Science in the Era of Large Language Models (LLMs) [54.35488233989787]
Computational protein science is dedicated to revealing knowledge and developing applications within the protein sequence-structure-function paradigm.
Recently, protein Language Models (pLMs) have emerged as a milestone in AI due to their unprecedented language processing and generalization capability.
arXiv Detail & Related papers (2025-01-17T16:21:18Z)
- Long-context Protein Language Model [76.95505296417866]
Self-supervised training of language models (LMs) has seen great success for protein sequences in learning meaningful representations and for generative drug design.
Most protein LMs are based on the Transformer architecture trained on individual proteins with short context lengths.
We propose LC-PLM based on an alternative protein LM architecture, BiMamba-S, built off selective structured state-space models.
We also introduce its graph-contextual variant, LC-PLM-G, which contextualizes protein-protein interaction graphs for a second stage of training.
arXiv Detail & Related papers (2024-10-29T16:43:28Z)
- ProteinRPN: Towards Accurate Protein Function Prediction with Graph-Based Region Proposals [4.525216077859531]
We introduce the Protein Region Proposal Network (ProteinRPN) for accurate protein function prediction.
ProteinRPN identifies potential functional regions (anchors) which are refined through the hierarchy-aware node drop pooling layer.
The representations of the predicted functional nodes are enriched using attention mechanisms and fed into a Graph Multiset Transformer.
arXiv Detail & Related papers (2024-09-01T04:40:04Z)
- Advanced atom-level representations for protein flexibility prediction utilizing graph neural networks
We propose graph neural networks (GNNs) to learn protein representations at the atomic level and predict B-factors from protein 3D structures.
The Meta-GNN model achieves a correlation coefficient of 0.71 on a large and diverse test set of over 4k proteins.
arXiv Detail & Related papers (2024-08-22T16:15:13Z)
- GOProteinGNN: Leveraging Protein Knowledge Graphs for Protein Representation Learning [27.192150057715835]
GOProteinGNN is a novel architecture that enhances protein language models by integrating protein knowledge graph information.
Our approach allows for the integration of information at both the individual amino acid level and the entire protein level, enabling a comprehensive and effective learning process.
arXiv Detail & Related papers (2024-07-31T17:54:22Z)
- NaNa and MiGu: Semantic Data Augmentation Techniques to Enhance Protein Classification in Graph Neural Networks [60.48306899271866]
We propose novel semantic data augmentation methods to incorporate backbone chemical and side-chain biophysical information into protein classification tasks.
Specifically, we leverage molecular biophysical, secondary structure, chemical bond, and ionic features of proteins to facilitate classification tasks.
arXiv Detail & Related papers (2024-03-21T13:27:57Z)
- Structure-Informed Protein Language Model [38.019425619750265]
We introduce the integration of remote homology detection to distill structural information into protein language models.
We evaluate the impact of this structure-informed training on downstream protein function prediction tasks.
arXiv Detail & Related papers (2024-02-07T09:32:35Z)
- A Latent Diffusion Model for Protein Structure Generation [50.74232632854264]
We propose a latent diffusion model that can reduce the complexity of protein modeling.
We show that our method can effectively generate novel protein backbone structures with high designability and efficiency.
arXiv Detail & Related papers (2023-05-06T19:10:19Z)
- Learning Geometrically Disentangled Representations of Protein Folding Simulations [72.03095377508856]
This work focuses on learning a generative neural network on a structural ensemble of a drug-target protein.
Model tasks involve characterizing the distinct structural fluctuations of the protein bound to various drug molecules.
Results show that our geometric learning-based method enjoys both accuracy and efficiency for generating complex structural variations.
arXiv Detail & Related papers (2022-05-20T19:38:00Z)
- Structure-aware Protein Self-supervised Learning [50.04673179816619]
We propose a novel structure-aware protein self-supervised learning method to capture structural information of proteins.
In particular, a well-designed graph neural network (GNN) model is pretrained to preserve the protein structural information.
We identify the relation between the sequential information in the protein language model and the structural information in the specially designed GNN model via a novel pseudo bi-level optimization scheme.
arXiv Detail & Related papers (2022-04-06T02:18:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.