Unsupervised language models for disease variant prediction
- URL: http://arxiv.org/abs/2212.03979v1
- Date: Wed, 7 Dec 2022 22:28:13 GMT
- Title: Unsupervised language models for disease variant prediction
- Authors: Allan Zhou, Nicholas C. Landolfi, Daniel C. O'Neill
- Abstract summary: We find that a single protein LM trained on broad sequence datasets can score pathogenicity for any gene variant zero-shot.
We show that it achieves scoring performance comparable to the state of the art when evaluated on clinically labeled variants of disease-related genes.
- Score: 3.6942566104432886
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: There is considerable interest in predicting the pathogenicity of protein variants in human genes. Due to the sparsity of high-quality labels, recent approaches turn to unsupervised learning, using Multiple Sequence Alignments (MSAs) to train generative models of natural sequence variation within each gene. These generative models then predict variant likelihood as a proxy for evolutionary fitness. In this work we instead combine this evolutionary principle with pretrained protein language models (LMs), which have already shown promising results in predicting protein structure and function. Instead of training separate models per gene, we find that a single protein LM trained on broad sequence datasets can score pathogenicity for any gene variant zero-shot, without MSAs or finetuning. We call this unsupervised approach VELM (Variant Effect via Language Models), and show that it achieves scoring performance comparable to the state of the art when evaluated on clinically labeled variants of disease-related genes.
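As a rough illustration of likelihood-based zero-shot variant scoring with a pretrained protein LM, the sketch below computes a masked-marginal log-likelihood ratio between the mutant and wild-type residue. The model name, the BOS-token offset handling, and the scoring rule are illustrative assumptions, not necessarily VELM's exact procedure.

```python
# Illustrative sketch (not the paper's exact implementation): zero-shot scoring
# of a missense variant with a pretrained masked protein language model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "facebook/esm2_t33_650M_UR50D"  # assumed ESM-style protein LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def variant_score(sequence: str, position: int, wt: str, mut: str) -> float:
    """Return log p(mut) - log p(wt) at `position` (0-based) with that residue
    masked; more negative scores suggest a more damaging variant."""
    assert sequence[position] == wt, "wild-type residue mismatch"
    tokens = tokenizer(sequence, return_tensors="pt")
    # +1 offset for the BOS/CLS token prepended by ESM-style tokenizers.
    tokens["input_ids"][0, position + 1] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(**tokens).logits
    log_probs = torch.log_softmax(logits[0, position + 1], dim=-1)
    wt_id = tokenizer.convert_tokens_to_ids(wt)
    mut_id = tokenizer.convert_tokens_to_ids(mut)
    return (log_probs[mut_id] - log_probs[wt_id]).item()

# Example: score a hypothetical A4G missense variant of a toy sequence.
print(variant_score("MKTAYIAKQR", position=3, wt="A", mut="G"))
```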
Related papers
- Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen [76.02070962797794]
We present Cell Flow for Generation, a flow-based conditional generative model for multi-modal single-cell counts.
Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks.
arXiv Detail & Related papers (2024-07-16T14:05:03Z) - VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling [60.91599380893732]
VQDNA is a general-purpose framework that renovates genome tokenization from the perspective of genome vocabulary learning.
By leveraging vector-quantized codebooks as learnable vocabulary, VQDNA can adaptively tokenize genomes into pattern-aware embeddings.
arXiv Detail & Related papers (2024-05-13T20:15:03Z) - Diffusion Language Models Are Versatile Protein Learners [75.98083311705182]
This paper introduces diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences.
We first pre-train scalable DPLMs from evolutionary-scale protein sequences within a generative self-supervised discrete diffusion probabilistic framework.
After pre-training, DPLM exhibits the ability to generate structurally plausible, novel, and diverse protein sequences for unconditional generation.
arXiv Detail & Related papers (2024-02-28T18:57:56Z) - xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein [76.18058946124111]
We propose a unified protein language model, xTrimoPGLM, to address protein understanding and generation tasks simultaneously.
xTrimoPGLM significantly outperforms other advanced baselines in 18 protein understanding benchmarks across four categories.
It can also generate de novo protein sequences following the principles of natural ones, and can perform programmable generation after supervised fine-tuning.
arXiv Detail & Related papers (2024-01-11T15:03:17Z) - ProPath: Disease-Specific Protein Language Model for Variant Pathogenicity [11.414690866985474]
We propose a disease-specific protein language model for variant pathogenicity, termed ProPath, to capture the pseudo-log-likelihood ratio in rare missense variants through a Siamese network.
Our results demonstrate that ProPath surpasses the pre-trained ESM1b with an over 5% improvement in AUC across both datasets.
arXiv Detail & Related papers (2023-11-06T18:43:47Z) - Predicting protein variants with equivariant graph neural networks [0.0]
We compare the abilities of equivariant graph neural networks (EGNNs) and sequence-based approaches to identify promising amino-acid mutations.
Our proposed structural approach achieves performance competitive with sequence-based approaches while being trained on significantly fewer molecules.
arXiv Detail & Related papers (2023-06-21T12:44:52Z) - PoET: A generative model of protein families as sequences-of-sequences [5.05828899601167]
We propose a generative model of whole protein families that learns to generate sets of related proteins as sequences-of-sequences.
PoET can be used as a retrieval-augmented language model to generate and score arbitrary modifications conditioned on any protein family of interest.
We show that PoET outperforms existing protein language models and evolutionary sequence models for variant function prediction across proteins of all depths.
arXiv Detail & Related papers (2023-06-09T16:06:36Z) - Reprogramming Pretrained Language Models for Protein Sequence Representation Learning [68.75392232599654]
We propose Representation Learning via Dictionary Learning (R2DL), an end-to-end representation learning framework.
R2DL reprograms a pretrained English language model to learn the embeddings of protein sequences.
Our model attains better accuracy and improves data efficiency by up to 105 times over the baselines set by pretrained and standard supervised methods.
arXiv Detail & Related papers (2023-01-05T15:55:18Z) - Plug & Play Directed Evolution of Proteins with Gradient-based Discrete MCMC [1.0499611180329804]
A long-standing goal of machine-learning-based protein engineering is to accelerate the discovery of novel mutations.
We introduce a sampling framework for evolving proteins in silico that supports mixing and matching a variety of unsupervised models.
By composing these models, we aim to improve our ability to evaluate unseen mutations and constrain search to regions of sequence space likely to contain functional proteins.
arXiv Detail & Related papers (2022-12-20T00:26:23Z) - ProGen2: Exploring the Boundaries of Protein Language Models [15.82416400246896]
We introduce a suite of protein language models, named ProGen2, that are scaled up to 6.4B parameters.
ProGen2 models show state-of-the-art performance in capturing the distribution of observed evolutionary sequences.
As large model sizes and raw numbers of protein sequences continue to become more widely accessible, our results suggest that a growing emphasis needs to be placed on the data distribution provided to a protein sequence model.
arXiv Detail & Related papers (2022-06-27T17:55:02Z) - rfPhen2Gen: A machine learning based association study of brain imaging phenotypes to genotypes [71.1144397510333]
We trained machine learning models to predict SNPs using 56 brain imaging QTs.
SNPs within the known Alzheimer disease (AD) risk gene APOE had the lowest RMSE for lasso and random forest.
Random forests identified additional SNPs that were not prioritized by the linear models but are known to be associated with brain-related disorders.
arXiv Detail & Related papers (2022-03-31T20:15:22Z)