MEDIMP: 3D Medical Images with clinical Prompts from limited tabular data for renal transplantation
- URL: http://arxiv.org/abs/2303.12445v2
- Date: Sat, 29 Apr 2023 15:42:49 GMT
- Title: MEDIMP: 3D Medical Images with clinical Prompts from limited tabular data for renal transplantation
- Authors: Leo Milecki, Vicky Kalogeiton, Sylvain Bodard, Dany Anglicheau,
Jean-Michel Correas, Marc-Olivier Timsit, Maria Vakalopoulou
- Abstract summary: We propose MEDIMP, a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI).
We propose a framework that generates medical prompts using automatic textual data augmentations from Large Language Models (LLMs).
Our goal is to learn meaningful representations of renal transplant DCE MRI, relevant to the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), making the most efficient use of the limited available multi-modal data.
- Score: 4.377239465814404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Renal transplantation is the most effective treatment for end-stage
renal disease. Arising from complex causes, a substantial risk of chronic
transplant dysfunction persists and may lead to graft loss. Medical imaging
plays a central role in renal transplant monitoring in clinical practice.
However, graft supervision is multi-disciplinary, notably joining nephrology,
urology, and radiology, and identifying robust prognostic biomarkers from such
high-dimensional and complex data is challenging. In this work,
taking inspiration from the recent success of Large Language Models (LLMs), we
propose MEDIMP -- Medical Images with clinical Prompts -- a model to learn
meaningful multi-modal representations of renal transplant Dynamic
Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI) by incorporating
structural clinicobiological data after translating them into text prompts.
MEDIMP is based on contrastive learning from joint text-image paired embeddings
to perform this challenging task. Moreover, we propose a framework that
generates medical prompts using automatic textual data augmentations from LLMs.
Our goal is to learn meaningful manifolds of renal transplant DCE MRI,
relevant to the prognosis of the transplant or patient status (2, 3, and 4
years after the transplant), making the most efficient use of the limited
available multi-modal data. Extensive experiments and comparisons with other renal
transplant representation learning methods with limited data prove the
effectiveness of MEDIMP in a relevant clinical setting, giving new directions
toward medical prompts. Our code is available at
https://github.com/leomlck/MEDIMP.
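In outline, the approach pairs a text prompt built from tabular clinicobiological variables with a 3D image embedding and trains the two modalities contrastively. The sketch below shows a hypothetical prompt template and a CLIP-style symmetric InfoNCE loss in plain NumPy; the field names, template wording, and temperature are illustrative assumptions, not MEDIMP's exact implementation:

```python
import numpy as np

def tabular_to_prompt(row):
    # Hypothetical template turning structured clinicobiological variables
    # into a medical prompt (field names are illustrative, not MEDIMP's).
    return (f"Renal transplant patient, serum creatinine {row['creatinine']} umol/L, "
            f"GFR {row['gfr']} mL/min, {row['months_post_transplant']} months "
            f"after transplantation.")

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # CLIP-style symmetric InfoNCE over L2-normalized paired embeddings:
    # matched (image, text) pairs sit on the diagonal of the similarity matrix.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (B, B) scaled cosine similarities

    def cross_entropy(l):
        # Row-wise log-softmax; loss is the negative log-prob of the diagonal.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

# Toy usage: a batch of 4 paired (image, text) embeddings of dimension 128.
rng = np.random.default_rng(0)
image_emb = rng.standard_normal((4, 128))
text_emb = rng.standard_normal((4, 128))
loss = contrastive_loss(image_emb, text_emb)
```

A quick sanity check: feeding identical embeddings for both modalities drives the loss toward zero, since every pair's highest similarity is then its own diagonal entry.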
Related papers
- GFE-Mamba: Mamba-based AD Multi-modal Progression Assessment via Generative Feature Extraction from MCI [5.355943545567233]
Alzheimer's Disease (AD) is an irreversible neurodegenerative disorder that often progresses from Mild Cognitive Impairment (MCI).
We introduce GFE-Mamba, a classifier based on Generative Feature Extraction (GFE).
It integrates data from assessment scales, MRI, and PET, enabling deeper multimodal fusion.
Our experimental results demonstrate that the GFE-Mamba model is effective in predicting the conversion from MCI to AD.
arXiv Detail & Related papers (2024-07-22T15:22:33Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- Multi-scale Multi-site Renal Microvascular Structures Segmentation for Whole Slide Imaging in Renal Pathology [4.743463035587953]
We present Omni-Seg, a novel single dynamic network method that capitalizes on multi-site, multi-scale training data.
We train a singular deep network using images from two datasets, HuBMAP and NEPTUNE.
Our proposed method provides renal pathologists with a powerful computational tool for the quantitative analysis of renal microvascular structures.
arXiv Detail & Related papers (2023-08-10T16:26:03Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Multi-Task Learning for Post-transplant Cause of Death Analysis: A Case Study on Liver Transplant [65.85767739748901]
Post-transplant cause of death (CoD) analysis provides a powerful tool for clinical decision making.
Traditional methods like the Model for End-stage Liver Disease (MELD) score and conventional machine learning (ML) methods are limited in CoD analysis.
We propose a novel framework called CoD-MTL leveraging multi-task learning to model the semantic relationships between various CoD prediction tasks jointly.
arXiv Detail & Related papers (2023-03-30T01:31:49Z)
- DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain tumor segmentation with incomplete multi-modal MRI scans [16.93394669748461]
Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations.
Despite the inspiring performance achieved by existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications.
We propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
arXiv Detail & Related papers (2022-11-15T09:01:14Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Multi-institutional Collaborations for Improving Deep Learning-based Magnetic Resonance Image Reconstruction Using Federated Learning [62.17532253489087]
Deep learning methods have been shown to produce superior performance on MR image reconstruction.
These methods require large amounts of data which is difficult to collect and share due to the high cost of acquisition and medical data privacy regulations.
We propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy.
arXiv Detail & Related papers (2021-03-03T03:04:40Z)
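For the federated MR-reconstruction entry above, the core aggregation step in such setups is commonly FedAvg: each institution trains on its local data and a server averages the resulting parameters, weighted by local dataset size. A minimal sketch under that assumption (illustrative, not the paper's implementation):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    # FedAvg aggregation: for each parameter tensor, take the mean across
    # clients, weighted by the size of each client's local dataset.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Toy usage: two "institutions", each holding one 2x2 weight matrix.
weights_a = [np.full((2, 2), 1.0)]  # trained at institution A (100 scans)
weights_b = [np.full((2, 2), 3.0)]  # trained at institution B (300 scans)
global_weights = federated_average([weights_a, weights_b], client_sizes=[100, 300])
# weighted mean: 1.0 * 0.25 + 3.0 * 0.75 = 2.5
```

Raw images never leave the institutions; only model parameters (or updates) are shared, which is what preserves patient privacy under data regulations.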
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.