FGBERT: Function-Driven Pre-trained Gene Language Model for Metagenomics
- URL: http://arxiv.org/abs/2402.16901v1
- Date: Sat, 24 Feb 2024 13:13:17 GMT
- Title: FGBERT: Function-Driven Pre-trained Gene Language Model for Metagenomics
- Authors: ChenRui Duan, Zelin Zang, Yongjie Xu, Hang He, Zihan Liu, Zijia Song,
Ju-Sheng Zheng, Stan Z. Li
- Abstract summary: We introduce a protein-based gene representation as a context-aware and structure-relevant tokenizer.
MGM and TEM-CL constitute our novel metagenomic language model FGBERT, pre-trained on 100 million metagenomic sequences.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metagenomic data, comprising mixed multi-species genomes, are prevalent in
diverse environments like oceans and soils, significantly impacting human
health and ecological functions. However, current research relies on K-mer
representations, limiting the capture of structurally relevant gene contexts.
To address these limitations and further our understanding of complex
relationships between metagenomic sequences and their functions, we introduce a
protein-based gene representation as a context-aware and structure-relevant
tokenizer. Our approach includes Masked Gene Modeling (MGM) for gene
group-level pre-training, providing insights into inter-gene contextual
information, and Triple Enhanced Metagenomic Contrastive Learning (TEM-CL) for
gene-level pre-training to model gene sequence-function relationships. MGM and
TEM-CL constitute our novel metagenomic language model FGBERT, pre-trained on
100 million metagenomic sequences. We demonstrate the superiority of our
proposed FGBERT on eight datasets.
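To make the two pre-training objectives concrete, here is a minimal sketch of gene-group masked modeling and a triplet-style contrastive loss in the spirit of MGM and TEM-CL. All shapes, vocabulary sizes, and hyperparameters are illustrative assumptions, not FGBERT's actual implementation.

```python
# Minimal sketch of gene-group masked modeling (MGM) and a triplet-style
# contrastive loss in the spirit of TEM-CL. All shapes, vocabulary sizes,
# and hyperparameters are illustrative assumptions, not FGBERT's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D, MASK_ID = 10_000, 256, 0          # hypothetical gene-token vocabulary

embed = nn.Embedding(VOCAB, D)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
    num_layers=4,
)
lm_head = nn.Linear(D, VOCAB)

def mgm_loss(gene_ids, mask_prob=0.15):
    """Mask random gene tokens in a gene group; predict them from context."""
    mask = torch.rand(gene_ids.shape) < mask_prob
    corrupted = gene_ids.masked_fill(mask, MASK_ID)
    logits = lm_head(encoder(embed(corrupted)))      # (B, L, VOCAB)
    return F.cross_entropy(logits[mask], gene_ids[mask])

def tem_cl_loss(anchor, positive, negative, margin=1.0):
    """Pull same-function sequence embeddings together, push others apart."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

batch = torch.randint(1, VOCAB, (8, 64))             # 8 gene groups, 64 genes each
print(mgm_loss(batch), tem_cl_loss(*torch.randn(3, 4, D)))
```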
Related papers
- Semantically Rich Local Dataset Generation for Explainable AI in Genomics [0.716879432974126]
Black box deep learning models trained on genomic sequences excel at predicting the outcomes of different gene regulatory mechanisms.
We propose using Genetic Programming to generate datasets by evolving perturbations in sequences that contribute to their semantic diversity.
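A toy sketch of that idea: evolve single-nucleotide perturbations of a sequence and keep those that most shift a black-box model's score. The scoring function below is a stand-in assumption, not the paper's trained genomic model.

```python
# Toy evolutionary loop: mutate a DNA sequence and select perturbations that
# most change a black-box model's prediction. The scorer is a placeholder.
import random

BASES = "ACGT"

def black_box_score(seq):            # stand-in for a trained genomic model
    return seq.count("GC") / max(len(seq) - 1, 1)

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(BASES.replace(seq[i], "")) + seq[i + 1:]

def evolve(seq, generations=200, pop_size=20):
    base = black_box_score(seq)
    pop = [mutate(seq) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: abs(black_box_score(s) - base), reverse=True)
        pop = pop[: pop_size // 2] + [mutate(s) for s in pop[: pop_size // 2]]
    return pop[0]                    # most prediction-shifting perturbation

seq = "".join(random.choice(BASES) for _ in range(100))
print(evolve(seq))
```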
arXiv Detail & Related papers (2024-07-03T10:31:30Z)
- Predicting Genetic Mutation from Whole Slide Images via Biomedical-Linguistic Knowledge Enhanced Multi-label Classification [119.13058298388101]
We develop a Biological-knowledge enhanced PathGenomic multi-label Transformer (BPGT) to improve genetic mutation prediction performance.
BPGT first establishes a novel gene encoder that constructs gene priors with two carefully designed modules.
BPGT then designs a label decoder that performs genetic mutation prediction with two tailored modules.
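The summary is high-level, but the generic pattern it describes can be sketched: learned gene-label embeddings attend over slide features, followed by per-label sigmoid outputs. Module names and sizes are assumptions, not BPGT's published design.

```python
# Generic label-decoder pattern for multi-label mutation prediction:
# gene-label embeddings query slide features via attention. Illustrative only.
import torch
import torch.nn as nn

class LabelDecoder(nn.Module):
    def __init__(self, n_genes=32, d=256):
        super().__init__()
        self.labels = nn.Parameter(torch.randn(n_genes, d))   # gene priors
        self.attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
        self.head = nn.Linear(d, 1)

    def forward(self, slide_feats):                  # (B, n_patches, d)
        q = self.labels.unsqueeze(0).expand(slide_feats.size(0), -1, -1)
        out, _ = self.attn(q, slide_feats, slide_feats)
        return self.head(out).squeeze(-1)            # (B, n_genes) logits

logits = LabelDecoder()(torch.randn(2, 100, 256))
print(torch.sigmoid(logits).shape)                   # per-gene mutation probs
```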
arXiv Detail & Related papers (2024-06-05T06:42:27Z)
- VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling [60.91599380893732]
VQDNA is a general-purpose framework that renovates genome tokenization from the perspective of genome vocabulary learning.
By leveraging vector-quantized codebooks as learnable vocabulary, VQDNA can adaptively tokenize genomes into pattern-aware embeddings.
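A minimal sketch of the vector-quantized tokenization step: each segment embedding is replaced by the index of its nearest codebook entry. Codebook size, dimensionality, and the embeddings themselves are illustrative assumptions.

```python
# Vector-quantized tokenization: map each embedding to the index of its
# nearest code vector in a (here random, in practice learned) codebook.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))          # 512 learnable code vectors

def vq_tokenize(embeddings):
    """Map embeddings (N, 64) to nearest-codebook-entry token ids (N,)."""
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

segment_embeddings = rng.normal(size=(100, 64))  # stand-in encoder output
print(vq_tokenize(segment_embeddings)[:10])      # discrete genome tokens
```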
arXiv Detail & Related papers (2024-05-13T20:15:03Z)
- Whole Genome Transformer for Gene Interaction Effects in Microbiome Habitat Specificity [3.972930262155919]
We propose a framework taking advantage of existing large models for gene vectorization to predict habitat specificity from entire microbial genome sequences.
We train and validate our approach on a large dataset of high quality microbiome genomes from different habitats.
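The two-stage recipe can be sketched as below: vectorize each gene with a pretrained model, pool over the genome, and fit a habitat classifier. The embedding function is a placeholder for the existing large model the paper relies on; the data are synthetic.

```python
# Two-stage habitat prediction: pretrained gene embeddings, pooled per genome,
# then a simple classifier. Embedding function and data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed_gene(gene):                 # placeholder for a pretrained gene model
    return rng.normal(size=128)

def genome_vector(genes):
    return np.mean([embed_gene(g) for g in genes], axis=0)

X = np.stack([genome_vector([f"gene_{i}" for i in range(50)]) for _ in range(40)])
y = rng.integers(0, 2, size=40)       # toy habitat labels
print(LogisticRegression(max_iter=1000).fit(X, y).score(X, y))
```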
arXiv Detail & Related papers (2024-05-09T09:34:51Z)
- Efficient and Scalable Fine-Tune of Language Models for Genome Understanding [49.606093223945734]
We present Lingo: Language prefix fIne-tuning for GenOmes.
Unlike DNA foundation models, Lingo strategically leverages natural language foundation models' contextual cues.
Lingo further accommodates numerous downstream fine-tuning tasks via an adaptive rank sampling method.
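A hedged sketch of the prefix mechanism on a frozen model follows: only a small set of prefix embeddings is trained. Lingo's adaptive rank sampling is not reproduced here, and all sizes are assumptions.

```python
# Prefix tuning in isolation: trainable prefix embeddings are prepended to
# frozen token embeddings before the (frozen) LM body. Sizes are illustrative.
import torch
import torch.nn as nn

class PrefixTuner(nn.Module):
    def __init__(self, lm_embed, d_model=256, prefix_len=16):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.lm_embed = lm_embed                 # frozen token embedding
        for p in self.lm_embed.parameters():
            p.requires_grad = False

    def forward(self, token_ids):                # (B, L)
        tok = self.lm_embed(token_ids)           # (B, L, d)
        pre = self.prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
        return torch.cat([pre, tok], dim=1)      # fed into the frozen LM body

embed = nn.Embedding(1000, 256)
x = PrefixTuner(embed)(torch.randint(0, 1000, (2, 32)))
print(x.shape)                                   # torch.Size([2, 48, 256])
```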
arXiv Detail & Related papers (2024-02-12T21:40:45Z)
- GENER: A Parallel Layer Deep Learning Network To Detect Gene-Gene Interactions From Gene Expression Data [0.7660368798066375]
We introduce a parallel-layer deep learning network designed exclusively for the identification of gene-gene relationships using gene expression data.
Our model achieved an average AUROC score of 0.834 on the combined BioGRID&DREAM5 dataset, outperforming competing methods in predicting gene-gene interactions.
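A parallel two-branch pairwise classifier of the kind the summary suggests can be sketched as follows; layer sizes and the shared-branch (siamese-style) choice are assumptions, not GENER's published architecture.

```python
# Two parallel branches encode each gene's expression profile; the merged
# representation is scored for interaction. Architecture is illustrative.
import torch
import torch.nn as nn

class PairNet(nn.Module):
    def __init__(self, n_samples=100, h=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_samples, h), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * h, h), nn.ReLU(), nn.Linear(h, 1))

    def forward(self, expr_a, expr_b):           # two genes' expression vectors
        z = torch.cat([self.branch(expr_a), self.branch(expr_b)], dim=-1)
        return torch.sigmoid(self.head(z))       # interaction probability

net = PairNet()
print(net(torch.randn(8, 100), torch.randn(8, 100)).shape)   # (8, 1)
```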
arXiv Detail & Related papers (2023-10-05T15:45:53Z)
- MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data [22.938437500266847]
We introduce a novel model called Multimodal Similarity Learning Graph Neural Network.
It combines Multimodal Machine Learning and Deep Graph Neural Networks to learn gene representations from single-cell sequencing and spatial transcriptomic data.
Our model efficiently produces unified gene representations for the analysis of gene functions, tissue functions, diseases, and species evolution.
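A single mean-aggregation message-passing layer over a gene similarity graph, sketched below, stands in for the paper's multimodal GNN; graph, features, and weights are synthetic.

```python
# One message-passing step: average neighbor features over a gene similarity
# graph, then apply a learned transform. A stand-in for the full MuSe-GNN.
import numpy as np

rng = np.random.default_rng(0)
n_genes, d = 50, 32
A = (rng.random((n_genes, n_genes)) < 0.1).astype(float)   # toy similarity graph
A = np.maximum(A, A.T) + np.eye(n_genes)                   # symmetric + self-loops
X = rng.normal(size=(n_genes, d))                          # per-gene features
W = rng.normal(size=(d, d)) * 0.1

def gnn_layer(A, X, W):
    deg = A.sum(axis=1, keepdims=True)
    return np.tanh((A / deg) @ X @ W)    # mean-aggregate neighbors, transform

print(gnn_layer(A, X, W).shape)          # unified gene representations
```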
arXiv Detail & Related papers (2023-09-29T13:33:53Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
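InfoNCE is the standard estimator behind mutual-information-maximizing representation learning; a toy version for paired imaging/genetic embeddings is sketched below. That GIM uses exactly this estimator is an assumption drawn from the name "InfoMax".

```python
# InfoNCE over paired embeddings: matched imaging/genetic pairs are positives,
# all mismatched pairs in the batch serve as negatives.
import torch
import torch.nn.functional as F

def info_nce(z_img, z_gen, temperature=0.1):
    z_img = F.normalize(z_img, dim=-1)
    z_gen = F.normalize(z_gen, dim=-1)
    logits = z_img @ z_gen.t() / temperature     # (B, B) similarity matrix
    target = torch.arange(z_img.size(0))         # diagonal = true pairs
    return F.cross_entropy(logits, target)

print(info_nce(torch.randn(16, 64), torch.randn(16, 64)))
```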
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- Feature extraction using Spectral Clustering for Gene Function Prediction [0.4492444446637856]
This paper presents a novel in silico approach to the gene annotation problem that combines cluster analysis and hierarchical multi-label classification.
The proposed approach is applied to a case study on Zea mays, one of the most dominant and productive crops in the world.
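The feature-extraction step can be sketched with scikit-learn: spectral clustering groups genes, and cluster memberships become features for a downstream hierarchical multi-label classifier. The expression matrix here is synthetic.

```python
# Spectral clustering as feature extraction: cluster genes by expression
# similarity, then use one-hot memberships as classifier inputs.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 30))       # 200 genes x 30 conditions (toy)

labels = SpectralClustering(
    n_clusters=8, affinity="nearest_neighbors", random_state=0
).fit_predict(expression)

cluster_features = np.eye(8)[labels]          # one-hot cluster membership
print(cluster_features.shape)                 # (200, 8) features per gene
```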
arXiv Detail & Related papers (2022-03-25T10:17:36Z)
- Multi-modal Self-supervised Pre-training for Regulatory Genome Across Cell Types [75.65676405302105]
We propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call GeneBERT.
We pre-train our model on the ATAC-seq dataset with 17 million genome sequences.
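A toy sketch of the masking objective such self-supervised pre-training typically applies to genome sequences; GeneBERT's tokenization and multi-modal (sequence + region) pairing are simplified away, so treat this as an assumption-laden illustration.

```python
# Masked-sequence pre-training, reduced to its core: corrupt random positions
# of a genome sequence and record the targets the model must reconstruct.
import random

BASES = "ACGT"

def mask_sequence(seq, mask_prob=0.15, mask_char="N"):
    """Return (corrupted sequence, {position: original base}) for masked modeling."""
    chars, targets = list(seq), {}
    for i in range(len(chars)):
        if random.random() < mask_prob:
            targets[i] = chars[i]
            chars[i] = mask_char
    return "".join(chars), targets

seq = "".join(random.choice(BASES) for _ in range(60))
corrupted, targets = mask_sequence(seq)
print(corrupted, targets)
```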
arXiv Detail & Related papers (2021-10-11T12:48:44Z)