AI challenges for predicting the impact of mutations on protein stability
- URL: http://arxiv.org/abs/2111.04208v1
- Date: Mon, 8 Nov 2021 00:10:56 GMT
- Title: AI challenges for predicting the impact of mutations on protein stability
- Authors: Fabrizio Pucci, Martin Schwersensky, Marianne Rooman
- Abstract summary: Many studies have been devoted to building new, more effective methods for predicting the impact of mutations on protein stability.
We discuss their features, algorithms, computational efficiency, and accuracy estimated on an independent test set.
We found that the accuracy of the predictors has stagnated at around 1 kcal/mol for over 15 years.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stability is a key ingredient of protein fitness and its modification through
targeted mutations has applications in various fields such as protein
engineering, drug design and deleterious variant interpretation. Many studies
have been devoted over the past decades to building new, more effective methods
for predicting the impact of mutations on protein stability, based on the
latest developments in artificial intelligence (AI). We discuss their features,
algorithms, computational efficiency, and accuracy estimated on an independent
test set. We focus on a critical analysis of their limitations, the recurrent
biases towards the training set, their generalizability and interpretability.
We found that the accuracy of the predictors has stagnated at around 1 kcal/mol
for over 15 years. We conclude by discussing the challenges that need to be
addressed to reach improved performance.
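The "accuracy of around 1 kcal/mol" quoted above refers to the root-mean-square error (RMSE) between predicted and experimentally measured stability changes (DDG) on an independent test set, usually reported alongside the Pearson correlation. A minimal sketch of how such an accuracy estimate is computed, on made-up illustrative values (not taken from any real predictor):

```python
import math

# Hypothetical experimental vs. predicted stability changes
# (DDG, in kcal/mol) for a small independent test set.
experimental = [1.2, -0.5, 2.3, 0.0, -1.8, 0.7]
predicted    = [0.8, -1.1, 1.5, 0.6, -0.9, 1.4]

# Root-mean-square error: the usual "accuracy in kcal/mol" metric.
rmse = math.sqrt(sum((e - p) ** 2 for e, p in zip(experimental, predicted))
                 / len(experimental))

# Pearson correlation coefficient, the other commonly reported metric.
n = len(experimental)
me = sum(experimental) / n
mp = sum(predicted) / n
cov = sum((e - me) * (p - mp) for e, p in zip(experimental, predicted))
pearson = cov / math.sqrt(sum((e - me) ** 2 for e in experimental)
                          * sum((p - mp) ** 2 for p in predicted))

print(f"RMSE = {rmse:.2f} kcal/mol, Pearson r = {pearson:.2f}")
```

On these toy values the RMSE is well under 1 kcal/mol; the point of the survey is that on realistic independent test sets, predictors have not pushed this figure below roughly 1 kcal/mol in 15 years.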
Related papers
- HERMES: Holographic Equivariant neuRal network model for Mutational Effect and Stability prediction [0.0]
HERMES is a 3D rotationally equivariant structure-based neural network model for mutational effect and stability prediction.
We present a suite of HERMES models, pre-trained with different strategies, and fine-tuned to predict the stability effect of mutations.
arXiv Detail & Related papers (2024-07-09T09:31:05Z)
- Learning to Predict Mutation Effects of Protein-Protein Interactions by Microenvironment-aware Hierarchical Prompt Learning [78.38442423223832]
We develop a novel codebook pre-training task, namely masked microenvironment modeling.
We demonstrate superior performance and training efficiency over state-of-the-art pre-training-based methods in mutation effect prediction.
arXiv Detail & Related papers (2024-05-16T03:53:21Z)
- Efficiently Predicting Protein Stability Changes Upon Single-point Mutation with Large Language Models [51.57843608615827]
The ability to precisely predict protein thermostability is pivotal for various subfields and applications in biochemistry.
We introduce an ESM-assisted efficient approach that integrates protein sequence and structural features to predict thermostability changes in proteins upon single-point mutations.
arXiv Detail & Related papers (2023-12-07T03:25:49Z)
- Multi-level Protein Representation Learning for Blind Mutational Effect Prediction [5.207307163958806]
This paper introduces a novel pre-training framework that cascades sequential and geometric analyzers for protein structures.
It guides mutational directions toward desired traits by simulating natural selection on wild-type proteins.
We assess the proposed approach using a public database and two new databases for a variety of variant effect prediction tasks.
arXiv Detail & Related papers (2023-06-08T03:00:50Z)
- Phylogeny-informed fitness estimation [58.720142291102135]
We propose phylogeny-informed fitness estimation, which exploits a population's phylogeny to estimate fitness evaluations.
Our results indicate that phylogeny-informed fitness estimation can mitigate the drawbacks of down-sampled lexicase.
This work serves as an initial step toward improving evolutionary algorithms by exploiting runtime phylogenetic analysis.
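The idea above, estimating the fitness of an individual that was not evaluated from its evaluated relatives in the population's phylogeny, can be sketched as follows; the tree, fitness values, and nearest-evaluated-ancestor lookup rule are hypothetical stand-ins, not the paper's actual estimation scheme:

```python
# Illustrative sketch of phylogeny-informed fitness estimation: an
# unevaluated individual's fitness is approximated by the fitness of
# its nearest evaluated ancestor in the population's phylogeny.

# parent[child] = parent id; None marks the root of the phylogeny.
parent = {"root": None, "a": "root", "b": "root", "c": "a", "d": "c"}

# Fitness is known only for the down-sampled subset that was evaluated.
evaluated = {"root": 0.30, "a": 0.55}

def estimate_fitness(ind):
    """Walk toward the root until an evaluated ancestor is found."""
    node = ind
    while node is not None:
        if node in evaluated:
            return evaluated[node]
        node = parent[node]
    raise ValueError(f"no evaluated ancestor for {ind!r}")

print(estimate_fitness("d"))  # inherits 0.55 from evaluated ancestor "a"
print(estimate_fitness("b"))  # falls back to the root's 0.30
```

This substitutes a cheap tree lookup for an expensive fitness evaluation, which is how such estimates can offset the information lost to down-sampling.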
arXiv Detail & Related papers (2023-06-06T19:05:01Z)
- Accurate and Definite Mutational Effect Prediction with Lightweight Equivariant Graph Neural Networks [2.381587712372268]
This research introduces a lightweight graph representation learning scheme that efficiently analyzes the microenvironment of wild-type proteins.
Our solution offers a wide range of benefits that make it an ideal choice for the community.
arXiv Detail & Related papers (2023-04-13T09:51:49Z)
- Structure-informed Language Models Are Protein Designers [69.70134899296912]
We present LM-Design, a generic approach to reprogramming sequence-based protein language models (pLMs).
We conduct a structural surgery on pLMs, in which a lightweight structural adapter is implanted into the pLM, endowing it with structural awareness.
Experiments show that our approach outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-02-03T10:49:52Z)
- Reprogramming Pretrained Language Models for Protein Sequence Representation Learning [68.75392232599654]
We propose Representation Learning via Dictionary Learning (R2DL), an end-to-end representation learning framework.
R2DL reprograms a pretrained English language model to learn the embeddings of protein sequences.
Our model can attain better accuracy and significantly improve data efficiency, by up to $10^5$ times over the baselines set by pretrained and standard supervised methods.
arXiv Detail & Related papers (2023-01-05T15:55:18Z)
- SPLDExtraTrees: Robust machine learning approach for predicting kinase inhibitor resistance [1.0674604700001966]
We propose a robust machine learning method, SPLDExtraTrees, which can accurately predict ligand binding affinity changes upon protein mutation.
The proposed method ranks training data following a specific scheme that starts with easy-to-learn samples.
Experiments substantiate the capability of the proposed method for predicting kinase inhibitor resistance under three scenarios.
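The easy-first ranking scheme described above is a self-paced learning idea: at each round, train only on samples whose loss under the current model falls below a growing threshold. A minimal sketch of that selection loop; the toy data, linear model, and threshold schedule are hypothetical stand-ins, not the authors' SPLDExtraTrees implementation:

```python
# Self-paced sample selection on toy 1-D regression data:
# (feature, target) pairs, with one clear outlier at x = 5.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 3.0)]

w = 0.0  # single weight of a toy linear model y = w * x

for threshold in (1.0, 4.0, 16.0):  # gradually admit harder samples
    # Rank samples by squared error under the current model.
    losses = [(y - w * x) ** 2 for x, y in data]
    selected = [(x, y) for (x, y), l in zip(data, losses) if l < threshold]
    if not selected:          # nothing "easy" yet: admit everything
        selected = data
    # Refit w on the selected (easy) subset by least squares.
    w = sum(x * y for x, y in selected) / sum(x * x for x, y in selected)
    print(f"threshold={threshold}: {len(selected)} samples, w={w:.2f}")
```

By the final round the outlier is still excluded, so the fitted slope tracks the four consistent samples; this robustness to noisy labels is what the easy-first ordering buys.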
arXiv Detail & Related papers (2021-11-15T09:07:45Z)
- Variance Minimization in the Wasserstein Space for Invariant Causal Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.