Denoising diffusion-based MRI to CT image translation enables automated
spinal segmentation
- URL: http://arxiv.org/abs/2308.09345v2
- Date: Tue, 14 Nov 2023 07:39:32 GMT
- Title: Denoising diffusion-based MRI to CT image translation enables automated
spinal segmentation
- Authors: Robert Graf, Joachim Schmitt, Sarah Schlaeger, Hendrik Kristian
Möller, Vasiliki Sideri-Lampretsa, Anjany Sekuboyina, Sandro Manuel Krieg,
Benedikt Wiestler, Bjoern Menze, Daniel Rueckert, Jan Stefan Kirschke
- Abstract summary: This retrospective study involved translating T1w and T2w MR image series into CT images in a total of n=263 pairs of CT/MR series.
Registration with two landmarks per vertebra enabled paired image-to-image translation from MR to CT and outperformed all unpaired approaches.
- Score: 8.094450260464354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background: Automated segmentation of spinal MR images plays a vital role
both scientifically and clinically. However, accurately delineating posterior
spine structures presents challenges.
Methods: This retrospective study, approved by the ethics committee,
involved translating T1w and T2w MR image series into CT images in a total of
n=263 pairs of CT/MR series. Landmark-based registration was performed to align
image pairs. We compared 2D paired (Pix2Pix, denoising diffusion implicit
models (DDIM) image mode, DDIM noise mode) and unpaired (contrastive unpaired
translation, SynDiff) image-to-image translation, using peak signal-to-noise
ratio (PSNR) as the quality measure. A publicly available segmentation network
segmented the synthesized CT datasets, and Dice scores were evaluated on
in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were
extended to 3D Pix2Pix and DDIM.
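For reference, the deterministic DDIM update (eta = 0) behind the DDIM samplers compared above is, in standard notation with $\bar\alpha_t$ the cumulative noise schedule and $\epsilon_\theta$ the learned denoiser (this is the generic DDIM rule, not a detail taken from this paper):

$$x_{t-1} = \sqrt{\bar\alpha_{t-1}} \cdot \frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}} + \sqrt{1-\bar\alpha_{t-1}}\,\epsilon_\theta(x_t, t)$$

PSNR, the translation-quality measure named above, can be computed per CT/synthetic-CT pair as in the following minimal sketch (NumPy arrays and a caller-chosen dynamic range are assumptions; this is not the authors' evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, synthesized: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio between a reference CT and a synthesized CT.

    data_range is the dynamic range of the reference intensities (an
    assumption here; the paper does not state the exact range used).
    """
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range**2 / mse)
```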
Results: 2D paired methods and SynDiff exhibited similar translation
performance and Dice scores on paired data. DDIM image mode achieved the
highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated
similar Dice scores (0.77). For craniocaudal axis rotations, at least two
landmarks per vertebra were required for registration. The 3D translation
outperformed the 2D approach, resulting in improved Dice scores (0.80) and
anatomically accurate segmentations at a higher resolution than the original MR
image.
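The Dice scores above (0.77 for the 2D pipelines, 0.80 for 3D) quantify volumetric overlap between predicted and reference masks. A minimal sketch of the standard Dice coefficient for one binary structure mask follows; the study's exact label handling and averaging over vertebrae are not reproduced here:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient for a single binary label mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```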
Conclusion: Registration with two landmarks per vertebra enabled paired
image-to-image translation from MR to CT and outperformed all unpaired
approaches. The 3D techniques provided anatomically correct segmentations,
avoiding underprediction of small structures such as the spinous process.
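The abstract specifies at least two landmarks per vertebra for registration but not the transform model. Purely as an illustration, a least-squares rigid (rotation plus translation) fit to paired landmarks via the standard Kabsch method could look like the sketch below; the function name and conventions are assumptions, not the authors' pipeline:

```python
import numpy as np

def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform mapping src landmarks onto dst landmarks.

    src, dst: (N, 3) corresponding landmark coordinates, e.g. two points per
    vertebra stacked over all vertebrae. Returns (R, t) with dst_i ~= R @ src_i + t.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```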
Related papers
- Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model [3.4248731707266264]
In neuroimaging, generally, brain CT is more cost-effective and accessible than MRI.
Medical image-to-image translation (I2I) serves as a promising solution.
This study is the first to achieve high-quality 3D medical I2I based only on a 2D diffusion model (DM), with no extra architectural models.
arXiv Detail & Related papers (2024-07-06T12:13:36Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view approach to vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine Transform Loss [58.58979566599889]
We propose a novel self-supervised learning method (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates strong performance in the quality of synthesized results under a severely misaligned and unpaired data setting.
arXiv Detail & Related papers (2022-01-29T13:45:39Z)
- Bridging the gap between paired and unpaired medical image translation [12.28777883776042]
We introduce modified pix2pix models for the tasks CT$\rightarrow$MR and MR$\rightarrow$CT, trained with unpaired CT and MR data, and MRCAT pairs generated from the MR scans.
The proposed modifications utilize the paired MR and MRCAT images to ensure good alignment between input and translated images, while unpaired CT images ensure that the MR$\rightarrow$CT model produces realistic-looking CT and the CT$\rightarrow$MR model works well with real CT as input.
arXiv Detail & Related papers (2021-10-15T23:15:12Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods; a generic cycle-consistency sketch follows at the end of this list.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans [22.60715394470069]
We devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model.
In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict the class label of the central voxel in a given patch.
arXiv Detail & Related papers (2020-10-20T20:42:52Z)
- Multi-modal segmentation of 3D brain scans using neural networks [0.0]
Deep convolutional neural networks are trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans.
Segmentation quality is quantified using the Dice metric for a total of 27 anatomical structures.
arXiv Detail & Related papers (2020-08-11T09:13:54Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation [18.500206499468902]
Cone-beam CT (CBCT) images are of low quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high-quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
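Several unpaired translators above (CyTran explicitly, and CycleGAN-style MR/CT methods more broadly) rely on a round-trip consistency term. The sketch below shows the generic CycleGAN-style loss in PyTorch, not CyTran's multi-level formulation; the generator callables and the weight are hypothetical:

```python
import torch.nn.functional as F

def cycle_consistency_loss(real_a, real_b, g_ab, g_ba, weight: float = 10.0):
    """Generic cycle-consistency loss for unpaired translation between
    domains A and B (e.g. MR and CT). g_ab, g_ba are generator networks."""
    recon_a = g_ba(g_ab(real_a))  # A -> B -> A round trip
    recon_b = g_ab(g_ba(real_b))  # B -> A -> B round trip
    return weight * (F.l1_loss(recon_a, real_a) + F.l1_loss(recon_b, real_b))
```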