Integration of Pre-trained Protein Language Models into Geometric Deep
Learning Networks
- URL: http://arxiv.org/abs/2212.03447v2
- Date: Mon, 30 Oct 2023 02:13:44 GMT
- Authors: Fang Wu, Lirong Wu, Dragomir Radev, Jinbo Xu, Stan Z. Li
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometric deep learning has recently achieved great success in non-Euclidean
domains, and learning on 3D structures of large biomolecules is emerging as a
distinct research area. However, its efficacy is largely constrained due to the
limited quantity of structural data. Meanwhile, protein language models trained
on substantial 1D sequences have shown burgeoning capabilities with scale in a
broad range of applications. Several previous studies consider combining these
different protein modalities to promote the representation power of geometric
neural networks, but fail to present a comprehensive understanding of their
benefits. In this work, we integrate the knowledge learned by well-trained
protein language models into several state-of-the-art geometric networks and
evaluate them on a variety of protein representation learning benchmarks, including
protein-protein interface prediction, model quality assessment, protein-protein
rigid-body docking, and binding affinity prediction. Our findings show an
overall improvement of 20% over baselines. Strong evidence indicates that the
incorporation of protein language models' knowledge enhances geometric
networks' capacity by a significant margin and can be generalized to complex
tasks.
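The integration strategy the abstract describes can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture: the dimensions, random weights, per-residue embeddings, and chain-graph adjacency below are all made-up placeholders standing in for real pLM outputs (e.g. per-residue embeddings from a pretrained language model) and real geometric features derived from 3D structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs for a 5-residue protein.
n_res, d_geo, d_lm = 5, 8, 16
geo_feats = rng.normal(size=(n_res, d_geo))  # geometric node features (stand-in for 3D-derived features)
lm_feats = rng.normal(size=(n_res, d_lm))    # per-residue pLM embeddings (stand-in for real model output)

# Chain-graph adjacency: residue i connected to i-1 and i+1.
idx = np.arange(n_res)
adj = (np.abs(idx[:, None] - idx[None, :]) == 1).astype(float)

# Early fusion: concatenate pLM embeddings onto the geometric node features.
x = np.concatenate([geo_feats, lm_feats], axis=1)  # shape (n_res, d_geo + d_lm)

# One message-passing step: mean over neighbors, then a linear update + ReLU.
deg = adj.sum(axis=1, keepdims=True)
msg = adj @ x / np.maximum(deg, 1.0)
W = rng.normal(scale=0.1, size=(x.shape[1], 32))
h = np.maximum((x + msg) @ W, 0.0)  # fused node representations, shape (n_res, 32)

print(h.shape)  # (5, 32)
```

Concatenation at the node-feature level is only one of several plausible fusion points; the same embeddings could instead be injected at the edge level or via a learned gating module.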
Related papers
- Geometric Self-Supervised Pretraining on 3D Protein Structures using Subgraphs [25.93347924265175]
We propose a novel self-supervised method to pretrain 3D graph neural networks on 3D protein structures.
By considering subgraphs and their relationships to the global protein structure, the model can learn to reason about these hierarchical levels of organization.
arXiv Detail & Related papers (2024-06-20T09:34:31Z)
- xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein [76.18058946124111]
We propose a unified protein language model, xTrimoPGLM, to address protein understanding and generation tasks simultaneously.
xTrimoPGLM significantly outperforms other advanced baselines in 18 protein understanding benchmarks across four categories.
It can also generate de novo protein sequences following the principles of natural ones, and can perform programmable generation after supervised fine-tuning.
arXiv Detail & Related papers (2024-01-11T15:03:17Z)
- Generative Pretrained Autoregressive Transformer Graph Neural Network applied to the Analysis and Discovery of Novel Proteins [0.0]
We report a flexible language-model based deep learning strategy, applied here to solve complex forward and inverse problems in protein modeling.
The model is applied to predict secondary structure content (per-residue level and overall content), protein solubility, and sequencing tasks.
We find that adding additional tasks yields emergent synergies that the model exploits in improving overall performance.
arXiv Detail & Related papers (2023-05-07T12:30:24Z)
- Structure-informed Language Models Are Protein Designers [69.70134899296912]
We present LM-Design, a generic approach to reprogramming sequence-based protein language models (pLMs).
We conduct structural surgery on pLMs: a lightweight structural adapter is implanted into the pLM, endowing it with structural awareness.
Experiments show that our approach outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-02-03T10:49:52Z)
- Contrastive Representation Learning for 3D Protein Structures [13.581113136149469]
We introduce a new representation learning framework for 3D protein structures.
Our framework uses unsupervised contrastive learning to learn meaningful representations of protein structures.
We show how these representations can be used to solve a large variety of tasks, such as protein function prediction, protein fold classification, structural similarity prediction, and protein-ligand binding affinity prediction.
arXiv Detail & Related papers (2022-05-31T10:33:06Z)
- Learning Geometrically Disentangled Representations of Protein Folding Simulations [72.03095377508856]
This work focuses on learning a generative neural network on a structural ensemble of a drug-target protein.
Model tasks involve characterizing the distinct structural fluctuations of the protein bound to various drug molecules.
Results show that our geometric learning-based method enjoys both accuracy and efficiency for generating complex structural variations.
arXiv Detail & Related papers (2022-05-20T19:38:00Z)
- Structure-aware Protein Self-supervised Learning [50.04673179816619]
We propose a novel structure-aware protein self-supervised learning method to capture structural information of proteins.
In particular, a well-designed graph neural network (GNN) model is pretrained to preserve the protein structural information.
We identify the relation between the sequential information in the protein language model and the structural information in the specially designed GNN model via a novel pseudo bi-level optimization scheme.
arXiv Detail & Related papers (2022-04-06T02:18:41Z)
- PersGNN: Applying Topological Data Analysis and Geometric Deep Learning to Structure-Based Protein Function Prediction [0.07340017786387766]
In this work, we isolate protein structure to make functional annotations for proteins in the Protein Data Bank.
We present PersGNN - an end-to-end trainable deep learning model that combines graph representation learning with topological data analysis.
arXiv Detail & Related papers (2020-10-30T02:24:35Z)
- BERTology Meets Biology: Interpreting Attention in Protein Language Models [124.8966298974842]
We demonstrate methods for analyzing protein Transformer models through the lens of attention.
We show that attention captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure.
We also present a three-dimensional visualization of the interaction between attention and protein structure.
arXiv Detail & Related papers (2020-06-26T21:50:17Z)
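The last entry's core idea, measuring how much Transformer attention falls on residue pairs that are spatially close in the folded structure, can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's analysis pipeline: the single attention head, random embeddings, random 3D coordinates, and the 8 Å contact threshold are all assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, d = 6, 4

# Toy single-head self-attention over random residue embeddings.
x = rng.normal(size=(n_res, d))
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
scores = (x @ Wq) @ (x @ Wk).T / np.sqrt(d)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax

# Toy 3D coordinates; residue pairs within 8 Angstroms count as contacts.
coords = rng.normal(scale=5.0, size=(n_res, 3))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contacts = (dist < 8.0) & ~np.eye(n_res, dtype=bool)

# Per-residue fraction of attention mass that lands on spatial contacts.
contact_attn = (attn * contacts).sum(axis=1)
print(contact_attn.shape)  # (6,)
```

With a real pretrained model, `attn` would come from one of its attention heads and `coords` from an experimental structure; heads whose `contact_attn` is consistently high are the ones said to "capture" folding structure.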
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this content (including all information) and is not responsible for any consequences of its use.