AD-GPT: Large Language Models in Alzheimer's Disease
- URL: http://arxiv.org/abs/2504.03071v1
- Date: Thu, 03 Apr 2025 22:49:10 GMT
- Title: AD-GPT: Large Language Models in Alzheimer's Disease
- Authors: Ziyu Liu, Lintao Tang, Zeliang Sun, Zhengliang Liu, Yanjun Lyu, Wei Ruan, Yangshuang Xu, Liang Shan, Jiyoon Shin, Xiaohe Chen, Dajiang Zhu, Tianming Liu, Rongjie Liu, Chao Huang
- Abstract summary: Large language models (LLMs) have emerged as powerful tools for medical information retrieval, but their accuracy and depth remain limited in specialized domains such as Alzheimer's disease (AD). We introduce AD-GPT, a domain-specific generative pre-trained transformer designed to enhance the retrieval and analysis of AD-related genetic and neurobiological information.
- Score: 22.79214699749541
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have emerged as powerful tools for medical information retrieval, yet their accuracy and depth remain limited in specialized domains such as Alzheimer's disease (AD), a growing global health challenge. To address this gap, we introduce AD-GPT, a domain-specific generative pre-trained transformer designed to enhance the retrieval and analysis of AD-related genetic and neurobiological information. AD-GPT integrates diverse biomedical data sources, including potential AD-associated genes, molecular genetic information, and key gene variants linked to brain regions. We develop a stacked LLM architecture combining Llama3 and BERT, optimized for four critical tasks in AD research: (1) genetic information retrieval, (2) gene-brain region relationship assessment, (3) gene-AD relationship analysis, and (4) brain region-AD relationship mapping. Comparative evaluations against state-of-the-art LLMs demonstrate AD-GPT's superior precision and reliability across these tasks, underscoring its potential as a robust and specialized AI tool for advancing AD research and biomarker discovery.
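As an illustration of the stacked architecture described in the abstract (an encoder-based model routing a query to one of the four AD tasks, with a Llama3-style generator answering within that task), the sketch below shows one plausible way to wire such a pipeline with the Hugging Face transformers library. The routing-by-zero-shot-classification scheme, the stand-in model names (facebook/bart-large-mnli for the encoder component, meta-llama/Meta-Llama-3-8B-Instruct for the generator), and the prompt format are all assumptions for illustration; this is not the authors' released code.

```python
# Minimal sketch of a stacked routing + generation pipeline in the spirit of AD-GPT.
# Model choices and prompts are illustrative assumptions, not the paper's code.
from transformers import pipeline

# The four AD tasks named in the abstract.
TASKS = [
    "genetic information retrieval",
    "gene-brain region relationship assessment",
    "gene-AD relationship analysis",
    "brain region-AD relationship mapping",
]

# Stand-in for the BERT component: a zero-shot classifier that routes queries to a task.
router = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Stand-in for the Llama3 component: an instruction-tuned generator (gated model;
# any locally available instruction-tuned LLM would do for the sketch).
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

def answer(query: str) -> str:
    # Stage 1: classify the query into one of the four tasks.
    task = router(query, candidate_labels=TASKS)["labels"][0]
    # Stage 2: generate an answer conditioned on the inferred task.
    prompt = f"Task: {task}\nQuestion: {query}\nAnswer:"
    return generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]

print(answer("Which brain regions are most associated with APOE variants?"))
```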
Related papers
- Explainable Graph-theoretical Machine Learning: with Application to Alzheimer's Disease Prediction [1.8719470717611726]
Alzheimer's disease (AD) affects 50 million people worldwide and is projected to affect 152 million by 2050.
Here, we introduce explainable graph-theoretical machine learning (XGML) to construct individual metabolic brain graphs.
XGML builds metabolic brain graphs and uncovers subgraphs predictive of eight AD-related cognitive scores in new subjects.
arXiv Detail & Related papers (2025-03-20T16:13:09Z) - Survey and Improvement Strategies for Gene Prioritization with Large Language Models [61.24568051916653]
Large language models (LLMs) have performed well in medical exams, but their effectiveness in diagnosing rare genetic diseases has not been assessed.
We used multi-agent and Human Phenotype Ontology (HPO) classification to categorize patients based on phenotypes and solvability levels.
At baseline, GPT-4 outperformed other LLMs, achieving nearly 30% accuracy in ranking causal genes correctly.
arXiv Detail & Related papers (2025-01-30T23:03:03Z) - ADAM-1: AI and Bioinformatics for Alzheimer's Detection and Microbiome-Clinical Data Integrations [4.426051635422496]
The Alzheimer's Disease Analysis Model Generation 1 (ADAM) is a multi-agent large language model (LLM) framework designed to integrate and analyze multi-modal data.
ADAM-1 synthesizes insights from diverse data sources and contextualizes findings using literature-driven evidence.
arXiv Detail & Related papers (2025-01-14T18:56:33Z) - AlzheimerRAG: Multimodal Retrieval Augmented Generation for PubMed articles [2.4063592468412276]
Multimodal Retrieval-Augmented Generation (RAG) applications are promising because they combine the strengths of information retrieval and generative models; a minimal RAG sketch appears after this list.
This paper introduces AlzheimerRAG, a multimodal RAG pipeline tool for biomedical research use cases.
arXiv Detail & Related papers (2024-12-21T16:59:00Z) - A Self-guided Multimodal Approach to Enhancing Graph Representation Learning for Alzheimer's Diseases [45.59286036227576]
Graph neural networks (GNNs) are powerful machine learning models designed to handle irregularly structured data.
This paper presents a self-guided, knowledge-infused multimodal GNN that autonomously incorporates domain knowledge into the model development process.
Our approach conceptualizes domain knowledge as natural language and introduces a specialized multimodal GNN capable of leveraging this uncurated knowledge.
arXiv Detail & Related papers (2024-12-09T05:16:32Z) - MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models [49.765466293296186]
Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for interactive diagnostic tools.
Med-LVLMs often suffer from factual hallucination, which can lead to incorrect diagnoses.
We propose a versatile multimodal RAG system, MMed-RAG, designed to enhance the factuality of Med-LVLMs.
arXiv Detail & Related papers (2024-10-16T23:03:27Z) - GP-GPT: Large Language Model for Gene-Phenotype Mapping [44.12550855245415]
GP-GPT is the first specialized large language model for genetic-phenotype knowledge representation and genomics relation analysis.
Our model is fine-tuned in two stages on a comprehensive corpus composed of over 3,000,000 terms in genomics, genetics and scientific publications.
arXiv Detail & Related papers (2024-09-15T18:56:20Z) - An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease [13.213387075528017]
Alzheimer's disease (AD) is the most prevalent form of dementia worldwide, encompassing a prodromal stage known as Mild Cognitive Impairment (MCI).
The objective of this work was to capture modulations of brain structure and function using multimodal MRI data and Single Nucleotide Polymorphisms.
arXiv Detail & Related papers (2024-06-19T07:31:47Z) - Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z) - Unsupervised Domain Adaptation for Dysarthric Speech Detection via Domain Adversarial Training and Mutual Information Minimization [52.82138296332476]
This paper makes a first attempt to formulate cross-domain dysarthric speech detection (DSD) as an unsupervised domain adaptation problem.
We propose a multi-task learning strategy, including dysarthria presence classification (DPC), domain adversarial training (DAT), and mutual information minimization (MIM); a gradient-reversal sketch of DAT appears after this list.
Experiments show that incorporating UDA yields absolute increases of 22.2% and 20.0% in utterance-level weighted average recall and speaker-level accuracy, respectively.
arXiv Detail & Related papers (2021-06-18T13:34:36Z) - A Graph Gaussian Embedding Method for Predicting Alzheimer's Disease Progression with MEG Brain Networks [59.15734147867412]
Characterizing the subtle changes of functional brain networks associated with Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression.
We developed a new deep learning method, termed the multiple graph Gaussian embedding model (MG2G).
We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
arXiv Detail & Related papers (2020-05-08T02:29:24Z)
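The AlzheimerRAG and MMed-RAG entries above both rest on the retrieval-augmented generation pattern: retrieve supporting passages, then condition a generator on them. The sketch below is a minimal, single-modality illustration of that pattern under assumed inputs (a toy three-document corpus, a TF-IDF retriever, and a plain-text prompt); it is not either paper's pipeline.

```python
# Minimal RAG sketch: TF-IDF retrieval followed by prompt construction for a generator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for a real biomedical document store.
corpus = [
    "APOE e4 is a major genetic risk factor for late-onset Alzheimer's disease.",
    "Hippocampal atrophy is an early structural marker of Alzheimer's disease.",
    "Amyloid-beta plaques accumulate in cortical regions in Alzheimer's disease.",
]

vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)

def build_rag_prompt(query: str, k: int = 2) -> str:
    # Retrieve the k passages most similar to the query.
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    context = "\n".join(corpus[i] for i in top)
    # A downstream generator (e.g., an instruction-tuned LLM) would receive this prompt.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_rag_prompt("Which gene variant raises Alzheimer's risk?"))
```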
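The domain adversarial training (DAT) component named in the dysarthric speech detection entry is commonly implemented with a gradient reversal layer. The PyTorch sketch below shows that general pattern with assumed layer sizes and a toy feature dimension; it is not that paper's model and it omits the mutual information minimization term.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Toy dimensions; a real system would use speech features and deeper networks.
encoder = nn.Sequential(nn.Linear(40, 64), nn.ReLU())   # shared feature extractor
task_head = nn.Linear(64, 2)                            # dysarthria presence classifier
domain_head = nn.Linear(64, 2)                          # source vs. target domain classifier

def losses(x, y_task, y_domain, lambd=0.1):
    feats = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(feats), y_task)
    # Gradient reversal makes the encoder maximize domain confusion
    # while the domain head still learns to discriminate domains.
    domain_loss = nn.functional.cross_entropy(
        domain_head(GradReverse.apply(feats, lambd)), y_domain)
    return task_loss + domain_loss
```

During training the combined loss is backpropagated once: the reversed gradient pushes the encoder toward domain-invariant features while the domain head still learns to separate domains.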