Large language models improve Alzheimer's disease diagnosis using
multi-modality data
- URL: http://arxiv.org/abs/2305.19280v1
- Date: Fri, 26 May 2023 18:42:19 GMT
- Title: Large language models improve Alzheimer's disease diagnosis using
multi-modality data
- Authors: Yingjie Feng, Jun Wang, Xianfeng Gu, Xiaoyin Xu, and Min Zhang
- Abstract summary: Non-imaging patient data such as patient information, genetic data, medication information, and cognitive and memory tests also play a very important role in diagnosis. We use a popular pre-trained large language model (LLM) to enhance the model's ability to utilize non-image data.
- Score: 19.535491994272245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In diagnosing challenging conditions such as Alzheimer's disease (AD),
imaging is an important reference. Non-imaging patient data such as patient
information, genetic data, medication information, and cognitive and memory
tests also play a very important role in diagnosis. However, limited by the
ability of artificial intelligence models to mine such information, most
existing models use only multi-modal image data and cannot make full use of
non-image data. We use a popular pre-trained large language model (LLM) to
enhance the model's ability to utilize non-image data, and achieve
state-of-the-art (SOTA) results on the ADNI dataset.
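The core idea the abstract describes, serializing non-image patient data into text so a pre-trained LLM can embed it, then fusing that embedding with imaging features, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: `record_to_text`, `embed_with_llm`, and all field names are hypothetical, and `embed_with_llm` is a deterministic stand-in for a real frozen LLM encoder.

```python
import numpy as np

def record_to_text(record: dict) -> str:
    """Serialize a non-image patient record (demographics, genetics,
    cognitive scores, medication) into a sentence an LLM can embed."""
    return "; ".join(f"{k}: {v}" for k, v in record.items())

def embed_with_llm(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a pre-trained LLM encoder. A real pipeline would
    feed the text to a frozen transformer and pool a hidden state;
    here we just derive a pseudo-embedding from the text so the
    sketch is self-contained."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def fuse(image_features: np.ndarray, record: dict) -> np.ndarray:
    """Concatenate imaging features with the LLM embedding of the
    non-image data into one multi-modal feature vector, which a
    downstream classifier would consume."""
    text_emb = embed_with_llm(record_to_text(record))
    return np.concatenate([image_features, text_emb])

# Hypothetical example record and pooled MRI features.
record = {"age": 74, "APOE4 alleles": 1, "MMSE": 24,
          "medication": "donepezil"}
image_features = np.zeros(128)  # e.g. pooled CNN features from MRI
fused = fuse(image_features, record)
print(fused.shape)  # 128 image dims + 64 text dims
```

Late fusion by concatenation is only one option; the paper's actual fusion mechanism may differ.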
Related papers
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Multi-OCT-SelfNet: Integrating Self-Supervised Learning with Multi-Source Data Fusion for Enhanced Multi-Class Retinal Disease Classification [2.5091334993691206]
Development of a robust deep-learning model for retinal disease diagnosis requires a substantial dataset for training.
The capacity to generalize effectively on smaller datasets remains a persistent challenge.
We've combined a wide range of data sources to improve performance and generalization to new data.
arXiv Detail & Related papers (2024-09-17T17:22:35Z)
- Toward Robust Early Detection of Alzheimer's Disease via an Integrated Multimodal Learning Approach [5.9091823080038814]
Alzheimer's Disease (AD) is a complex neurodegenerative disorder marked by memory loss, executive dysfunction, and personality changes.
This study introduces an advanced multimodal classification model that integrates clinical, cognitive, neuroimaging, and EEG data.
arXiv Detail & Related papers (2024-08-29T08:26:00Z)
- A Disease-Specific Foundation Model Using Over 100K Fundus Images: Release and Validation for Abnormality and Multi-Disease Classification on Downstream Tasks [0.0]
We developed a Fundus-Specific Pretrained Model (Image+Fundus), a supervised artificial intelligence model trained to detect abnormalities in fundus images.
A total of 57,803 images were used to develop this pretrained model, which achieved superior performance across various downstream tasks.
arXiv Detail & Related papers (2024-08-16T15:03:06Z)
- Unconditional Latent Diffusion Models Memorize Patient Imaging Data: Implications for Openly Sharing Synthetic Data [2.04850174048739]
We train latent diffusion models on CT, MR, and X-ray datasets for synthetic data generation.
We then detect the amount of training data memorized utilizing our novel self-supervised copy detection approach.
Our findings show a surprisingly high degree of patient data memorization across all datasets.
arXiv Detail & Related papers (2024-02-01T22:58:21Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Robust Alzheimer's Progression Modeling using Cross-Domain Self-Supervised Deep Learning [3.0948853907734044]
We develop a cross-domain self-supervised learning approach for disease prognostic modeling as a regression problem using medical images as input.
We demonstrate that self-supervised pretraining can improve the prediction of Alzheimer's Disease progression from brain MRI.
We also show that pretraining on extended (but not labeled) brain MRI data outperforms pretraining on natural images.
arXiv Detail & Related papers (2022-11-15T23:04:15Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
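The self-training technique named in the last entry above can be sketched as a generic pseudo-labeling loop: fit on the labeled set, pseudo-label only confident unlabeled samples, and retrain on the enlarged set. This is a schematic of the general method, not that paper's exact framework (which adds improved regularization); the nearest-centroid model and all thresholds here are illustrative choices.

```python
import numpy as np

def self_train(model_fit, model_predict_proba,
               X_lab, y_lab, X_unlab,
               threshold=0.9, rounds=3):
    """Generic self-training: fit on labeled data, pseudo-label
    unlabeled samples whose predicted confidence exceeds
    `threshold`, and retrain on the enlarged training set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        params = model_fit(X, y)
        if len(pool) == 0:
            break
        proba = model_predict_proba(params, pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, proba[keep].argmax(axis=1)])
        pool = pool[~keep]
    return model_fit(X, y), X, y

# Illustrative base learner: nearest-centroid with softmax scores.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_proba(cents, X):
    classes = sorted(cents)
    dists = np.stack(
        [np.linalg.norm(X - cents[c], axis=1) for c in classes], axis=1)
    logits = -dists
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 2 labeled points, 40 unlabeled points in two blobs.
rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unlab = np.vstack([rng.normal(0, 0.5, (20, 2)),
                     rng.normal(4, 0.5, (20, 2))])
params, X_all, y_all = self_train(
    fit_centroids, predict_proba, X_lab, y_lab, X_unlab)
print(f"training set grew from 2 to {len(X_all)} samples")
```

The confidence threshold controls the trade-off between adding more pseudo-labels and keeping them accurate; too low a threshold lets label noise compound across rounds.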
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.