Classical and learned MR to pseudo-CT mappings for accurate transcranial
ultrasound simulation
- URL: http://arxiv.org/abs/2206.15441v1
- Date: Thu, 30 Jun 2022 17:33:44 GMT
- Title: Classical and learned MR to pseudo-CT mappings for accurate transcranial
ultrasound simulation
- Authors: Maria Miscouridou, José A. Pineda-Pardo, Charlotte J. Stagg, Bradley
E. Treeby, Antonio Stanziola
- Abstract summary: Three methods for generating pseudo-CT images from magnetic resonance (MR) images were compared.
Ultrasound simulations were also performed using the generated pseudo-CT images and compared to simulations based on CT.
- Score: 0.33598755777055367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based treatment planning for transcranial ultrasound therapy typically
involves mapping the acoustic properties of the skull from an x-ray computed
tomography (CT) image of the head. Here, three methods for generating pseudo-CT
images from magnetic resonance (MR) images were compared as an alternative to
CT. A convolutional neural network (U-Net) was trained on paired MR-CT images
to generate pseudo-CT images from either T1-weighted or zero-echo time (ZTE) MR
images (denoted tCT and zCT, respectively). A direct mapping from ZTE to
pseudo-CT was also implemented (denoted cCT). When comparing the pseudo-CT and
ground truth CT images for the test set, the mean absolute error was 133, 83,
and 145 Hounsfield units (HU) across the whole head, and 398, 222, and 336 HU
within the skull for the tCT, zCT, and cCT images, respectively. Ultrasound
simulations were also performed using the generated pseudo-CT images and
compared to simulations based on CT. An annular array transducer was used
targeting the visual or motor cortex. The mean differences in the simulated
focal pressure, focal position, and focal volume were 9.9%, 1.5 mm, and 15.1%
for simulations based on the tCT images, 5.7%, 0.6 mm, and 5.7% for the zCT,
and 6.7%, 0.9 mm, and 12.1% for the cCT. The improved results for images mapped
from ZTE highlight the advantage of using imaging sequences which improve
contrast of the skull bone. Overall, these results demonstrate that acoustic
simulations based on MR images can give comparable accuracy to those based on
CT.
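The evaluation above combines an image-domain metric (mean absolute error in Hounsfield units over head and skull masks) with simulation-domain metrics (differences in focal pressure, position, and volume). A minimal sketch of two of these, assuming hypothetical NumPy volumes — the array names, shapes, and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def mae_hu(pseudo_ct, ground_truth_ct, mask):
    """Mean absolute error in Hounsfield units, restricted to a boolean mask
    (e.g. a whole-head or skull segmentation)."""
    return float(np.abs(pseudo_ct[mask] - ground_truth_ct[mask]).mean())

def focal_shift_mm(pressure_a, pressure_b, voxel_size_mm):
    """Euclidean distance in mm between the peak-pressure voxels of two
    simulated acoustic fields on the same grid."""
    pa = np.unravel_index(np.argmax(pressure_a), pressure_a.shape)
    pb = np.unravel_index(np.argmax(pressure_b), pressure_b.shape)
    return float(np.linalg.norm((np.array(pa) - np.array(pb)) * voxel_size_mm))

# Toy example on synthetic volumes (not real CT data):
rng = np.random.default_rng(0)
ct = rng.integers(-1000, 2000, size=(32, 32, 32)).astype(float)
pct = ct + rng.normal(0, 50, size=ct.shape)   # pseudo-CT with ~50 HU noise
head = np.ones_like(ct, dtype=bool)           # trivial whole-head mask
print(mae_hu(pct, ct, head))
```

In practice the masks would come from head and skull segmentations, and the pressure fields from acoustic simulations run on the CT and pseudo-CT volumes respectively.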
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001)
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team achieved lesion-wise median Dice similarity coefficients (DSC) of 0.976, 0.976, and 0.964 for the enhancing tumor, tumor core, and whole tumor, respectively.
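The Dice similarity coefficient used here measures segmentation overlap as 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks — the lesion-wise matching used in the challenge, which pairs up individual lesions before scoring, is omitted:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two masks sharing one of their two foreground voxels.
pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
print(dice(pred, truth))  # 0.5
```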
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- Cycle-consistent Generative Adversarial Network Synthetic CT for MR-only Adaptive Radiation Therapy on MR-Linac [0.0]
Cycle-GAN model was trained with MRI and CT scan slices from MR-LINAC treatments, generating sCT volumes.
Dosimetric evaluations indicated minimal differences between sCTs and dCTs, with sCTs showing better air-bubble reconstruction.
arXiv Detail & Related papers (2023-12-03T04:38:17Z)
- Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
Cone-beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by enabling accurate treatment delivery when organ anatomy changes.
CBCT images suffer from scatter noise and artifacts, making it challenging to rely on CBCT alone for precise dose calculation and accurate tissue localization.
We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
arXiv Detail & Related papers (2023-08-07T07:23:43Z)
- Synthetic CT Generation from MRI using 3D Transformer-based Denoising Diffusion Model [2.232713445482175]
Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning.
We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
arXiv Detail & Related papers (2023-05-31T00:32:00Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep-learning-based neural networks to correct axial and coronal motion artifacts in OCT using a single scan.
Experimental results show that the proposed method effectively corrects motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
We investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI using a 3D patch-based conditional generative adversarial network (cGAN).
We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and simulations using the acoustic toolbox.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Synthetic CT Skull Generation for Transcranial MR Imaging-Guided Focused Ultrasound Interventions with Conditional Adversarial Networks [5.921808547303054]
Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively under MRI guidance.
To accurately target ultrasound through the skull, the transmitted waves must constructively interfere at the target region.
arXiv Detail & Related papers (2022-02-21T11:34:29Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that requires neither large annotated datasets nor backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.