ImUnity: a generalizable VAE-GAN solution for multicenter MR image
harmonization
- URL: http://arxiv.org/abs/2109.06756v1
- Date: Tue, 14 Sep 2021 15:21:19 GMT
- Title: ImUnity: a generalizable VAE-GAN solution for multicenter MR image
harmonization
- Authors: Stenzel Cackowski, Emmanuel L. Barbier, Michel Dojat, Thomas Christen
- Abstract summary: ImUnity is an original deep-learning model designed for efficient and flexible MR image harmonization.
A VAE-GAN network, coupled with a confusion module and an optional biological preservation module, uses multiple 2D-slices taken from different anatomical locations in each subject of the training database.
It eventually generates 'corrected' MR images that can be used for various multi-center population studies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ImUnity is an original deep-learning model designed for efficient and
flexible MR image harmonization. A VAE-GAN network, coupled with a confusion
module and an optional biological preservation module, uses multiple 2D-slices
taken from different anatomical locations in each subject of the training
database, as well as image contrast transformations for its self-supervised
training. It eventually generates 'corrected' MR images that can be used for
various multi-center population studies. Using 3 open source databases (ABIDE,
OASIS and SRPBS), which contain MR images from multiple acquisition scanner
types or vendors and a large range of subject ages, we show that ImUnity: (1)
outperforms state-of-the-art methods in terms of the quality of images generated
using traveling subjects; (2) removes site or scanner biases while improving
patient classification; (3) harmonizes data coming from new sites or scanners
without the need for additional fine-tuning; and (4) allows the selection of
multiple MR reconstructed images according to the desired applications. Tested
here on T1-weighted images, ImUnity could be used to harmonize other types of
medical images.
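The abstract notes that ImUnity's self-supervised training uses multiple 2D slices per subject together with image contrast transformations. As a minimal sketch of that augmentation idea (a hypothetical gamma-style transform, not the authors' actual implementation), one can build (altered input, original target) training pairs like this:

```python
import numpy as np

def contrast_transform(slice_2d, gamma):
    """Apply a gamma-style contrast transformation to a 2D MR slice.

    The slice is rescaled to [0, 1], raised to the power `gamma`
    (gamma < 1 brightens mid-tones, gamma > 1 darkens them), then
    mapped back to the original intensity range.
    """
    lo, hi = slice_2d.min(), slice_2d.max()
    if hi == lo:  # flat slice: nothing to transform
        return slice_2d.copy()
    norm = (slice_2d - lo) / (hi - lo)
    return norm ** gamma * (hi - lo) + lo

def make_training_pair(slice_2d, rng):
    """Build a (contrast-altered input, original target) pair
    for self-supervised training (illustrative only)."""
    gamma = rng.uniform(0.5, 2.0)  # random contrast strength
    return contrast_transform(slice_2d, gamma), slice_2d

rng = np.random.default_rng(0)
slice_2d = rng.random((4, 4))   # toy stand-in for an MR slice
x, y = make_training_pair(slice_2d, rng)
```

The transform preserves the slice's intensity range while reshaping its contrast, so the network must learn to undo contrast changes without altering anatomy.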
Related papers
- IGUANe: a 3D generalizable CycleGAN for multicenter harmonization of
brain MR images [0.0]
Deep learning methods for image translation have emerged as a solution for harmonizing MR images across sites.
In this study, we introduce IGUANe, an original 3D model that leverages the strengths of domain translation.
The model can be applied to any image, even from an unknown acquisition site.
arXiv Detail & Related papers (2024-02-05T17:38:49Z)
- Partition-A-Medical-Image: Extracting Multiple Representative Sub-regions for Few-shot Medical Image Segmentation [23.926487942901872]
Few-shot Medical Image Segmentation (FSMIS) is a promising solution for medical image segmentation tasks where annotated data are scarce.
We present an approach to extract multiple representative sub-regions from a given support medical image.
We then introduce a novel Prototypical Representation Debiasing (PRD) module based on a two-way elimination mechanism.
arXiv Detail & Related papers (2023-09-20T09:31:57Z)
- Single-subject Multi-contrast MRI Super-resolution via Implicit Neural Representations [9.683341998041634]
Implicit Neural Representations (INR) are used to learn two different contrasts of complementary views as a continuous spatial function.
Our model provides realistic super-resolution across different pairs of contrasts in our experiments with three datasets.
arXiv Detail & Related papers (2023-03-27T10:18:42Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid-domain learning framework, which allows it to recover the frequency signal in the $k$-space domain while restoring detail in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Self-Supervised Multi-Modal Alignment for Whole Body Medical Imaging [70.52819168140113]
We use a dataset of over 20,000 subjects from the UK Biobank with both whole body Dixon technique magnetic resonance (MR) scans and also dual-energy x-ray absorptiometry (DXA) scans.
We introduce a multi-modal image-matching contrastive framework, that is able to learn to match different-modality scans of the same subject with high accuracy.
Without any adaptation, we show that the correspondences learned during this contrastive training step can be used to perform automatic cross-modal scan registration.
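The cross-modal matching this summary describes can be illustrated with a toy sketch: after contrastive training, embeddings of the same subject from the two modalities point in nearly the same direction, so a cosine-similarity argmax recovers the pairing. The embeddings below are synthetic placeholders, not UK Biobank data:

```python
import numpy as np

def match_scans(mr_emb, dxa_emb):
    """Return, for each MR embedding, the index of the closest DXA embedding.

    Embeddings are L2-normalized so the dot product is cosine similarity;
    same-subject pairs should score highest after contrastive training.
    """
    mr = mr_emb / np.linalg.norm(mr_emb, axis=1, keepdims=True)
    dxa = dxa_emb / np.linalg.norm(dxa_emb, axis=1, keepdims=True)
    return (mr @ dxa.T).argmax(axis=1)

# Synthetic stand-in for learned embeddings: subject i's MR and DXA
# vectors are small perturbations of a shared per-subject direction.
rng = np.random.default_rng(1)
base = rng.normal(size=(5, 32))
mr_emb = base + 0.01 * rng.normal(size=(5, 32))
dxa_emb = base + 0.01 * rng.normal(size=(5, 32))
matches = match_scans(mr_emb, dxa_emb)  # same-subject pairs should match
```

With the pairing recovered this way, the learned correspondences can then drive downstream steps such as cross-modal registration, as the paper reports.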
arXiv Detail & Related papers (2021-07-14T12:35:05Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
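As a simplified illustration of the imputation idea (MGP-VAE places a GP prior over VAE latent codes; the sketch below instead conditions a plain joint Gaussian over four scalar sub-modality values, with a hypothetical covariance), the missing block is filled in with the standard Gaussian conditional mean:

```python
import numpy as np

def impute_missing(x_obs, obs_idx, mis_idx, cov):
    """Impute missing sub-modality values via the Gaussian conditional mean.

    For a zero-mean Gaussian with covariance `cov`, the conditional mean
    of the missing block given the observed block is
        C_mo @ inv(C_oo) @ x_obs.
    """
    C_oo = cov[np.ix_(obs_idx, obs_idx)]  # observed-observed covariance
    C_mo = cov[np.ix_(mis_idx, obs_idx)]  # missing-observed covariance
    return C_mo @ np.linalg.solve(C_oo, x_obs)

# Toy covariance assuming strong correlation among four sub-modalities.
cov = np.full((4, 4), 0.9)
np.fill_diagonal(cov, 1.0)
x_obs = np.array([1.0, 0.8, 1.2])  # three observed values (hypothetical)
x_hat = impute_missing(x_obs, [0, 1, 2], [3], cov)  # estimate the fourth
```

The higher the assumed correlation between sub-modalities, the more the imputed value tracks the observed ones; MGP-VAE learns such correlation structure across subjects and sub-modalities rather than fixing it by hand.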
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Universal Model for Multi-Domain Medical Image Retrieval [88.67940265012638]
Medical Image Retrieval (MIR) helps doctors quickly find similar patients' data.
MIR is becoming increasingly helpful due to the wide use of digital imaging modalities.
However, the popularity of various digital imaging modalities in hospitals also poses several challenges to MIR.
arXiv Detail & Related papers (2020-07-14T23:22:04Z)
- Multi-Domain Image Completion for Random Missing Input Data [17.53581223279953]
Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities.
Due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources.
We propose a general approach to complete the random missing domain(s) data in real applications.
arXiv Detail & Related papers (2020-07-10T16:38:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.