Generating Tertiary Protein Structures via an Interpretative Variational
Autoencoder
- URL: http://arxiv.org/abs/2004.07119v2
- Date: Wed, 16 Jun 2021 06:02:16 GMT
- Title: Generating Tertiary Protein Structures via an Interpretative Variational
Autoencoder
- Authors: Xiaojie Guo, Yuanqi Du, Sivani Tadepalli, Liang Zhao, and Amarda Shehu
- Abstract summary: This paper proposes and evaluates an alternative approach to generating functionally-relevant three-dimensional structures of a protein.
A comprehensive evaluation of several deep architectures shows the promise of generative models in directly revealing the latent space for sampling novel tertiary structures.
- Score: 16.554053012204182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much scientific enquiry across disciplines is founded upon a mechanistic
treatment of dynamic systems that ties form to function. A highly visible
instance of this is in molecular biology, where an important goal is to
determine functionally-relevant forms/structures that a protein molecule
employs to interact with molecular partners in the living cell. This goal is
typically pursued under the umbrella of stochastic optimization with algorithms
that optimize a scoring function. Research repeatedly shows that current
scoring functions, though steadily improving, correlate only weakly with molecular
activity. Inspired by recent momentum in generative deep learning, this paper
proposes and evaluates an alternative approach to generating
functionally-relevant three-dimensional structures of a protein. Though deep
generative models typically struggle with highly-structured data, the work
presented here circumvents this challenge via graph-generative models. A
comprehensive evaluation of several deep architectures shows the promise of
generative models in directly revealing the latent space for sampling novel
tertiary structures, as well as in highlighting axes/factors that carry
structural meaning and open the black box often associated with deep models.
The work presented here is a first step towards interpretative, deep generative
models becoming viable and informative complementary approaches to protein
structure prediction.
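The core idea in the abstract — encode a highly-structured representation of a tertiary structure into a low-dimensional latent space, then sample that space to decode novel structures — can be sketched minimally. The sketch below is an illustrative assumption, not the authors' architecture: it uses a pairwise-distance matrix as the structured input, untrained linear encoder/decoder maps, and a two-dimensional latent space (small enough that individual axes could be inspected for structural meaning, as the paper proposes).

```python
# Minimal, untrained sketch of a VAE over tertiary-structure representations.
# All names and sizes (N_RES, D_LATENT, the linear maps) are illustrative
# assumptions for exposition, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

N_RES = 16                 # number of residues (assumed, for illustration)
D_IN = N_RES * N_RES       # flattened pairwise-distance matrix
D_LATENT = 2               # tiny latent space whose axes can be inspected

# Illustrative random weights; a real model would learn these by maximizing
# the evidence lower bound (ELBO) over a dataset of structures.
W_mu = rng.normal(0, 0.01, (D_IN, D_LATENT))
W_logvar = rng.normal(0, 0.01, (D_IN, D_LATENT))
W_dec = rng.normal(0, 0.01, (D_LATENT, D_IN))

def encode(x):
    """Map a flattened distance matrix to latent mean and log-variance."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: the standard VAE reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to a symmetric, non-negative distance matrix."""
    d = (z @ W_dec).reshape(N_RES, N_RES)
    d = 0.5 * (d + d.T)        # pairwise distances are symmetric
    d = np.abs(d)              # and non-negative
    np.fill_diagonal(d, 0.0)   # zero self-distance
    return d

def elbo_terms(x):
    """Return (reconstruction MSE, KL divergence to a standard normal prior)."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    recon = decode(z).reshape(-1)
    mse = np.mean((x - recon) ** 2)
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return mse, kl

# A fake "structure": random 3-D coordinates -> pairwise distance matrix.
coords = rng.normal(size=(N_RES, 3))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
mse, kl = elbo_terms(dist.reshape(-1))

# Sampling a novel structure = decoding a draw from the latent prior N(0, I).
novel = decode(rng.normal(size=D_LATENT))
```

Sampling from the prior and decoding is what "directly revealing the latent space for sampling novel tertiary structures" amounts to; varying one latent coordinate at a time while decoding is the usual way latent axes are probed for interpretable structural meaning.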
Related papers
- Life-Code: Central Dogma Modeling with Multi-Omics Sequence Unification [53.488387420073536]
Life-Code is a comprehensive framework that spans different biological functions.
Life-Code achieves state-of-the-art performance on various tasks across three omics.
arXiv Detail & Related papers (2025-02-11T06:53:59Z)
- GENERator: A Long-Context Generative Genomic Foundation Model [66.46537421135996]
We present a generative genomic foundation model featuring a context length of 98k base pairs (bp) and 1.2B parameters.
The model adheres to the central dogma of molecular biology, accurately generating protein-coding sequences.
It also shows significant promise in sequence optimization, particularly through the prompt-responsive generation of promoter sequences.
arXiv Detail & Related papers (2025-02-11T05:39:49Z)
- Multi-Scale Representation Learning for Protein Fitness Prediction [31.735234482320283]
Previous methods have primarily relied on self-supervised models trained on vast, unlabeled protein sequence or structure datasets.
We introduce the Sequence-Structure-Surface Fitness (S3F) model - a novel multimodal representation learning framework that integrates protein features across several scales.
Our approach combines sequence representations from a protein language model with Geometric Vector Perceptron networks encoding protein backbone and detailed surface topology.
arXiv Detail & Related papers (2024-12-02T04:28:10Z)
- SFM-Protein: Integrative Co-evolutionary Pre-training for Advanced Protein Sequence Representation [97.99658944212675]
We introduce a novel pre-training strategy for protein foundation models.
It emphasizes the interactions among amino acid residues to enhance the extraction of both short-range and long-range co-evolutionary features.
Trained on a large-scale protein sequence dataset, our model demonstrates superior generalization ability.
arXiv Detail & Related papers (2024-10-31T15:22:03Z)
- AUTODIFF: Autoregressive Diffusion Modeling for Structure-based Drug Design [16.946648071157618]
We propose a diffusion-based fragment-wise autoregressive generation model for structure-based drug design (SBDD).
We first design a novel molecule assembly strategy, named conformal motif, that preserves the conformation of local molecular structures.
We then encode the interaction of the protein-ligand complex with an SE(3)-equivariant convolutional network and generate molecules motif-by-motif with diffusion modeling.
arXiv Detail & Related papers (2024-04-02T14:44:02Z)
- A Systematic Study of Joint Representation Learning on Protein Sequences and Structures [38.94729758958265]
Learning effective protein representations is critical in a variety of tasks in biology such as predicting protein functions.
Recent sequence representation learning methods based on Protein Language Models (PLMs) excel in sequence-based tasks, but their direct adaptation to tasks involving protein structures remains a challenge.
Our study undertakes a comprehensive exploration of joint protein representation learning by integrating a state-of-the-art PLM with distinct structure encoders.
arXiv Detail & Related papers (2023-03-11T01:24:10Z)
- Modeling Molecular Structures with Intrinsic Diffusion Models [2.487445341407889]
This thesis proposes Intrinsic Diffusion Modeling.
It combines diffusion generative models with scientific knowledge about the flexibility of biological complexes.
We demonstrate the effectiveness of this approach on two fundamental tasks at the basis of computational chemistry and biology.
arXiv Detail & Related papers (2023-02-23T03:26:48Z)
- State-specific protein-ligand complex structure prediction with a multi-scale deep generative model [68.28309982199902]
We present NeuralPLexer, a computational approach that can directly predict protein-ligand complex structures.
Our study suggests that a data-driven approach can capture the structural cooperativity between proteins and small molecules, showing promise in accelerating the design of enzymes, drug molecules, and beyond.
arXiv Detail & Related papers (2022-09-30T01:46:38Z)
- Retrieval-based Controllable Molecule Generation [63.44583084888342]
We propose a new retrieval-based framework for controllable molecule generation.
We use a small set of molecules to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria.
Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning.
arXiv Detail & Related papers (2022-08-23T17:01:16Z)
- Protein model quality assessment using rotation-equivariant, hierarchical neural networks [8.373439916313018]
We present a novel deep learning approach to assess the quality of a protein model.
Our method achieves state-of-the-art results in scoring protein models submitted to recent rounds of CASP.
arXiv Detail & Related papers (2020-11-27T05:03:53Z)
- BERTology Meets Biology: Interpreting Attention in Protein Language Models [124.8966298974842]
We demonstrate methods for analyzing protein Transformer models through the lens of attention.
We show that attention captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure.
We also present a three-dimensional visualization of the interaction between attention and protein structure.
arXiv Detail & Related papers (2020-06-26T21:50:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.