Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model
- URL: http://arxiv.org/abs/2407.05059v1
- Date: Sat, 6 Jul 2024 12:13:36 GMT
- Title: Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model
- Authors: Kyobin Choo, Youngjun Jun, Mijin Yun, Seong Jae Hwang
- Abstract summary: In neuroimaging, generally, brain CT is more cost-effective and accessible than MRI.
Medical image-to-image translation (I2I) serves as a promising solution.
This study is the first to achieve high-quality 3D medical I2I based only on a 2D DM with no extra architectural models.
- Score: 3.4248731707266264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In neuroimaging, brain CT is generally a more cost-effective and accessible imaging option than MRI. Nevertheless, CT exhibits inferior soft-tissue contrast and higher noise levels, yielding less precise structural clarity. In response, leveraging the more readily available CT to construct its counterpart MRI, namely, medical image-to-image translation (I2I), serves as a promising solution. Particularly, while diffusion models (DMs) have recently risen as a powerhouse, they also come with a few practical caveats for medical I2I. First, DMs' inherent stochasticity from random noise sampling cannot guarantee consistent MRI generation that faithfully reflects its CT. Second, for 3D volumetric images, which are prevalent in medical imaging, naively using 2D DMs leads to slice inconsistency, e.g., abnormal structural and brightness changes. While 3D DMs do exist, their significant training costs and data dependency bring hesitation. As a solution, we propose novel style key conditioning (SKC) and inter-slice trajectory alignment (ISTA) sampling for the 2D Brownian bridge diffusion model. Specifically, SKC ensures a consistent imaging style (e.g., contrast) across slices, and ISTA interconnects the independent sampling of each slice, deterministically achieving style- and shape-consistent 3D CT-to-MRI translation. To the best of our knowledge, this study is the first to achieve high-quality 3D medical I2I based only on a 2D DM with no extra architectural models. Our experimental results show superior 3D medical I2I compared to existing 2D and 3D baselines, using an in-house CT-MRI dataset and the BraTS2023 FLAIR-T1 MRI dataset.
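The key property of the Brownian bridge diffusion process underlying this approach is that, unlike standard diffusion, it is pinned at both endpoints: the trajectory runs from the target image (MRI) to the source image (CT) rather than to pure noise, so reverse sampling can start deterministically from the CT input. A minimal sketch of the forward bridge process, using the common schedule with interpolation weight m_t = t/T and variance 2·m_t·(1 − m_t); the exact schedule and scaling are assumptions, not the paper's precise formulation:

```python
import numpy as np

def brownian_bridge_forward(x0, y, t, T, rng=None):
    """Sample x_t on the Brownian bridge between x0 (target, e.g. an MRI
    slice) and y (source, e.g. the paired CT slice) at step t of T.

    The bridge is pinned at both ends: x_0 = x0 and x_T = y exactly,
    since the noise variance vanishes at t = 0 and t = T."""
    rng = np.random.default_rng() if rng is None else rng
    m_t = t / T                         # interpolation weight toward the source
    delta_t = 2.0 * m_t * (1.0 - m_t)   # bridge variance, zero at both endpoints
    eps = rng.standard_normal(np.shape(x0))
    return (1.0 - m_t) * x0 + m_t * y + np.sqrt(delta_t) * eps
```

Because x_T equals the CT image exactly, every slice's reverse trajectory starts from a deterministic endpoint, which is what makes schemes like ISTA sampling (aligning the trajectories of neighboring slices) possible without per-slice random initial noise.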
Related papers
- Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2 [1.6275928583134276]
We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis.
MST offers enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
arXiv Detail & Related papers (2024-11-24T12:11:11Z) - 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - Accurate Patient Alignment without Unnecessary Imaging Dose via Synthesizing Patient-specific 3D CT Images from 2D kV Images [10.538839084727975]
Tumor visibility is constrained by the projection of the patient's anatomy onto a 2D plane.
In treatment rooms with 3D onboard imaging (3D-OBI) such as cone-beam CT (CBCT), the field of view (FOV) is limited and the imaging dose is unnecessarily high.
We propose a dual-model framework built with hierarchical ViT blocks to reconstruct 3D CT from kV images obtained at the treatment position.
arXiv Detail & Related papers (2024-04-01T19:55:03Z) - Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation [8.094450260464354]
This retrospective study involved translating T1w and T2w MR image series into CT images in a total of n=263 pairs of CT/MR series.
Registration using two landmarks per vertebra enabled paired image-to-image translation from MR to CT and outperformed all unpaired approaches.
arXiv Detail & Related papers (2023-08-18T07:07:15Z) - Two-and-a-half Order Score-based Model for Solving 3D Ill-posed Inverse Problems [7.074380879971194]
We propose a novel two-and-a-half order score-based model (TOSM) for 3D volumetric reconstruction.
During the training phase, our TOSM learns data distributions in 2D space, which reduces the complexity of training.
In the reconstruction phase, the TOSM updates the data distribution in 3D space, utilizing complementary scores along three directions.
arXiv Detail & Related papers (2023-08-16T17:07:40Z) - Make-A-Volume: Leveraging Latent Diffusion Models for Cross-Modality 3D Brain MRI Synthesis [35.45013834475523]
Cross-modality medical image synthesis is a critical topic and has the potential to facilitate numerous applications in the medical imaging field.
Most current medical image synthesis methods rely on generative adversarial networks and suffer from notorious mode collapse and unstable training.
We introduce a new paradigm for volumetric medical data synthesis by leveraging 2D backbones and present a diffusion-based framework, Make-A-Volume.
arXiv Detail & Related papers (2023-07-19T16:01:09Z) - Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z) - Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether the increased spatial context of MRI volumes, combined with spatial erasing, leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
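The 2D-slice-VAE entry above pairs a per-slice latent model with a Gaussian that captures relationships between slices, so that independently decoded slices still form a coherent volume. A minimal sketch of that idea, using an AR(1)-style covariance (correlation decaying with slice distance) as an illustrative assumption rather than the paper's exact model:

```python
import numpy as np

def sample_correlated_slice_latents(n_slices, latent_dim, rho=0.9, rng=None):
    """Draw per-slice latent codes that are correlated across neighboring
    slices, so a 2D decoder applied slice-by-slice produces a smoothly
    varying volume. rho in (0, 1) controls how strongly adjacent slices
    agree; rho=0 recovers fully independent slices."""
    rng = np.random.default_rng() if rng is None else rng
    # Covariance between slices i and j decays with their distance |i - j|.
    idx = np.arange(n_slices)
    cov = rho ** np.abs(idx[:, None] - idx[None, :])
    # One correlated draw across slices per latent dimension.
    z = rng.multivariate_normal(np.zeros(n_slices), cov, size=latent_dim).T
    return z  # shape (n_slices, latent_dim); row i conditions the decoder for slice i
```

This contrasts with the main paper's approach: rather than learning a statistical model over slice latents, SKC and ISTA enforce consistency directly inside the diffusion sampling trajectory.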
This list is automatically generated from the titles and abstracts of the papers on this site.