Improve Cross-Modality Segmentation by Treating MRI Images as Inverted CT Scans
- URL: http://arxiv.org/abs/2405.03713v1
- Date: Sat, 4 May 2024 14:02:52 GMT
- Title: Improve Cross-Modality Segmentation by Treating MRI Images as Inverted CT Scans
- Authors: Hartmut Häntze, Lina Xu, Leonhard Donle, Felix J. Dorfner, Alessa Hering, Lisa C. Adams, Keno K. Bressem
- Abstract summary: We show that a simple image inversion technique can significantly improve the segmentation quality of CT segmentation models on MRI data.
Image inversion is straightforward to implement and does not require dedicated graphics processing units (GPUs)
- Score: 0.4867169878981935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computed tomography (CT) segmentation models frequently include classes that are not currently supported by magnetic resonance imaging (MRI) segmentation models. In this study, we show that a simple image inversion technique can significantly improve the segmentation quality of CT segmentation models on MRI data, by using the TotalSegmentator model, applied to T1-weighted MRI images, as example. Image inversion is straightforward to implement and does not require dedicated graphics processing units (GPUs), thus providing a quick alternative to complex deep modality-transfer models for generating segmentation masks for MRI data.
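The core idea is a simple intensity inversion: bright MRI structures are flipped to appear dark, so a T1-weighted volume roughly mimics the appearance a CT segmentation model expects. A minimal sketch of such a preprocessing step is shown below; the normalization range and function name are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def invert_intensities(mri: np.ndarray) -> np.ndarray:
    """Min-max normalize a volume to [0, 1] and flip the intensity axis,
    so bright MRI tissue becomes dark, CT-like tissue (hypothetical
    simplification of the paper's preprocessing)."""
    lo, hi = float(mri.min()), float(mri.max())
    scaled = (mri - lo) / (hi - lo)  # normalize to [0, 1]
    return 1.0 - scaled              # invert: bright <-> dark

# Toy volume: a bright "tissue" cube on a dark background.
vol = np.zeros((4, 4, 4), dtype=np.float32)
vol[1:3, 1:3, 1:3] = 100.0
inv = invert_intensities(vol)
```

Because this is a pure voxel-wise operation, it runs in milliseconds on a CPU, which is why no GPU is needed.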
Related papers
- ContextMRI: Enhancing Compressed Sensing MRI through Metadata Conditioning [51.26601171361753]
We propose ContextMRI, a text-conditioned diffusion model for MRI that integrates granular metadata into the reconstruction process.
We show that increasing the fidelity of metadata, ranging from slice location and contrast to patient age, sex, and pathology, systematically boosts reconstruction performance.
arXiv Detail & Related papers (2025-01-08T05:15:43Z)
- XLSTM-HVED: Cross-Modal Brain Tumor Segmentation and MRI Reconstruction Method Using Vision XLSTM and Heteromodal Variational Encoder-Decoder [9.141615533517719]
We introduce the XLSTM-HVED model, which integrates a heteromodal encoder-decoder framework with the Vision XLSTM module to reconstruct missing MRI modalities.
The key innovation of our approach is the Self-Attention Variational (SAVE) module, which improves the integration of modal features.
Our experiments using the BraTS 2024 dataset demonstrate that our model significantly outperforms existing advanced methods in handling cases where modalities are missing.
arXiv Detail & Related papers (2024-12-09T09:04:02Z)
- MRGen: Diffusion-based Controllable Data Engine for MRI Segmentation towards Unannotated Modalities [59.61465292965639]
This paper investigates a new paradigm for leveraging generative models in medical applications.
We propose a diffusion-based data engine, termed MRGen, which enables generation conditioned on text prompts and masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- Domain-Agnostic Stroke Lesion Segmentation Using Physics-Constrained Synthetic Data [0.15749416770494706]
We propose two novel approaches using synthetic quantitative MRI (qMRI) images to enhance the robustness and generalisability of segmentation models.
We trained a qMRI estimation model to predict qMRI maps from MPRAGE images, which were used to simulate diverse MRI sequences for segmentation training.
A second approach built upon prior work in synthetic data for stroke lesion segmentation, generating qMRI maps from a dataset of tissue labels.
arXiv Detail & Related papers (2024-12-04T13:52:05Z)
- An Ensemble Approach for Brain Tumor Segmentation and Synthesis [0.12777007405746044]
The integration of machine learning in magnetic resonance imaging (MRI) is proving to be incredibly effective.
Deep learning models utilize multiple layers of processing to capture intricate details of complex data.
We propose a deep learning framework that ensembles state-of-the-art architectures to achieve accurate segmentation.
arXiv Detail & Related papers (2024-11-26T17:28:51Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
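The queries/keys formulation above amounts to cross-attention: features of the under-sampled image attend to features of the fully-sampled reference, transferring texture from the reference. A toy NumPy sketch of this mechanism follows; the feature shapes and function name are illustrative assumptions, not the TTM implementation.

```python
import numpy as np

def texture_cross_attention(query_feats: np.ndarray,
                            ref_feats: np.ndarray) -> np.ndarray:
    """Under-sampled features (queries) attend to reference features
    (keys/values) via scaled dot-product attention; the output is a
    weighted mixture of reference features (texture transfer sketch)."""
    d = query_feats.shape[-1]
    scores = query_feats @ ref_feats.T / np.sqrt(d)   # (Nq, Nr) similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # rows sum to 1
    return weights @ ref_feats                        # (Nq, d) transferred features

q = np.random.default_rng(0).normal(size=(5, 8))  # under-sampled features
r = np.random.default_rng(1).normal(size=(7, 8))  # reference features
out = texture_cross_attention(q, r)
```

Because the module only consumes and produces feature maps, it can be bolted onto an existing reconstruction network, which is what "stacked on prior MRI reconstruction approaches" suggests.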
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Latent Correlation Representation Learning for Brain Tumor Segmentation with Missing MRI Modalities [2.867517731896504]
Accurately segmenting brain tumors from MR images is key to clinical diagnosis and treatment planning.
In clinical practice, some imaging modalities are often missing.
We present a novel brain tumor segmentation algorithm with missing modalities.
arXiv Detail & Related papers (2021-04-13T14:21:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.