Synthetic CT Skull Generation for Transcranial MR Imaging-Guided Focused
Ultrasound Interventions with Conditional Adversarial Networks
- URL: http://arxiv.org/abs/2202.10136v2
- Date: Tue, 22 Feb 2022 17:38:49 GMT
- Title: Synthetic CT Skull Generation for Transcranial MR Imaging-Guided Focused
Ultrasound Interventions with Conditional Adversarial Networks
- Authors: Han Liu, Michelle K. Sigona, Thomas J. Manuel, Li Min Chen, Charles F.
Caskey, Benoit M. Dawant
- Abstract summary: Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively under MRI guidance.
To accurately target ultrasound through the skull, the transmitted waves must constructively interfere at the target region.
- Score: 5.921808547303054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic
ultrasound method that focuses sound through the skull to a small region
noninvasively under MRI guidance. It is clinically approved to thermally ablate
regions of the thalamus and is being explored for other therapies, such as
blood-brain barrier opening and neuromodulation. To accurately target
ultrasound through the skull, the transmitted waves must constructively
interfere at the target region. However, heterogeneity of the sound speed,
density, and ultrasound attenuation in different individuals' skulls requires
patient-specific estimates of these parameters for optimal treatment planning.
CT imaging is currently the gold standard for estimating acoustic properties of
an individual skull during clinical procedures, but CT imaging exposes patients
to radiation and increases the overall number of imaging procedures required
for therapy. A method to estimate acoustic parameters in the skull without the
need for CT would be desirable. Here, we synthesized CT images from routinely
acquired T1-weighted MRI by using a 3D patch-based conditional generative
adversarial network and evaluated the performance of synthesized CT images for
treatment planning with transcranial focused ultrasound. We compared the
performance of synthetic CT to real CT images using Kranion and k-Wave acoustic
simulation. Our work demonstrates the feasibility of replacing real CT with the
MR-synthesized CT for TcMRgFUS planning.
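The pipeline described in the abstract has two main computational pieces: a 3D patch-based conditional GAN that translates T1-weighted MR patches into CT patches, and an acoustic-simulation step in which the resulting CT values are converted into the properties a solver such as k-Wave needs. The Python sketch below is a minimal, hypothetical illustration of the first piece in a pix2pix-style setup; the network depth, patch size (32^3), channel counts, and loss weighting are assumptions for illustration, not the authors' configuration.
```python
# Minimal sketch (not the authors' code): 3D patch-based conditional GAN for
# MR -> synthetic-CT translation, in the spirit of pix2pix extended to 3D.
# All shapes, channel counts, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class PatchGenerator3D(nn.Module):
    """Toy encoder-decoder that maps a T1-weighted MR patch to a CT patch."""
    def __init__(self, ch=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch * 2, 4, stride=2, padding=1),
            nn.InstanceNorm3d(ch * 2), nn.LeakyReLU(0.2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1),
            nn.InstanceNorm3d(ch), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, mr_patch):
        return self.decode(self.encode(mr_patch))

class PatchDiscriminator3D(nn.Module):
    """3D PatchGAN-style critic on concatenated (MR, CT) patch pairs."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch * 2, 4, stride=2, padding=1),
            nn.InstanceNorm3d(ch * 2), nn.LeakyReLU(0.2),
            nn.Conv3d(ch * 2, 1, 4, stride=1, padding=1),  # grid of real/fake logits
        )

    def forward(self, mr_patch, ct_patch):
        return self.net(torch.cat([mr_patch, ct_patch], dim=1))

# One illustrative training step on a random 32^3 patch pair.
G, D = PatchGenerator3D(), PatchDiscriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

mr, ct = torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32)

# Discriminator: real (MR, CT) pairs vs. generated (MR, sCT) pairs.
sct = G(mr)
d_real, d_fake = D(mr, ct), D(mr, sct.detach())
loss_d = 0.5 * (adv_loss(d_real, torch.ones_like(d_real))
                + adv_loss(d_fake, torch.zeros_like(d_fake)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: fool the discriminator + L1 fidelity to the real CT patch.
d_fake = D(mr, sct)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(sct, ct)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```
For the second piece, synthesized Hounsfield units are commonly mapped to sound speed and density before running an acoustic simulation. The short sketch below uses an apparent-porosity mapping of the kind widely used in transcranial ultrasound modeling; the constants and the omission of attenuation are simplifying assumptions, not values reported in this paper.
```python
# Rough sketch of a CT-to-acoustic-property mapping used ahead of a
# transcranial simulation with a solver such as k-Wave (constants illustrative).
import numpy as np

def ct_to_acoustic(hu, c_water=1500.0, c_bone=3100.0,
                   rho_water=1000.0, rho_bone=2200.0):
    """Map Hounsfield units to sound speed (m/s) and density (kg/m^3)
    via an apparent-porosity model: phi = 1 - HU/1000."""
    phi = np.clip(1.0 - hu / 1000.0, 0.0, 1.0)      # ~1 in water/soft tissue, ~0 in dense bone
    c = c_water * phi + c_bone * (1.0 - phi)        # sound speed
    rho = rho_water * phi + rho_bone * (1.0 - phi)  # density
    return c, rho

# Example: a voxel at 900 HU -> phi = 0.1, c ~ 2940 m/s, rho ~ 2080 kg/m^3.
c, rho = ct_to_acoustic(np.array([0.0, 900.0, 1800.0]))
```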
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show that, depending on the input modality, the models can perform very differently.
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
- Autonomous Path Planning for Intercostal Robotic Ultrasound Imaging Using Reinforcement Learning [45.5123007404575]
US examination for thoracic applications is still challenging due to the acoustic shadow cast by the subcutaneous rib cage.
We present a reinforcement learning approach for planning scanning paths between ribs to monitor changes in lesions on internal organs.
Experiments have been carried out on unseen CTs with randomly defined single or multiple scanning targets.
arXiv Detail & Related papers (2024-04-15T16:52:53Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z)
- Thoracic Cartilage Ultrasound-CT Registration using Dense Skeleton Graph [49.11220791279602]
It is challenging to accurately map planned paths from a generic atlas to individual patients, particularly for thoracic applications.
A graph-based non-rigid registration is proposed to enable transferring planned paths from the atlas to the current setup.
arXiv Detail & Related papers (2023-07-07T18:57:21Z)
- Synthetic CT Generation from MRI using 3D Transformer-based Denoising Diffusion Model [2.232713445482175]
Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning.
We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
arXiv Detail & Related papers (2023-05-31T00:32:00Z)
- Self-supervised Physics-based Denoising for Computed Tomography [2.2758845733923687]
Computed Tomography (CT) imposes risk on patients due to its inherent X-ray radiation.
Lowering the radiation dose reduces the health risks but leads to noisier measurements, which decreases the tissue contrast and causes artifacts in CT images.
Modern deep learning noise suppression methods alleviate the challenge but require low-noise-high-noise CT image pairs for training.
We introduce a new self-supervised approach for CT denoising, Noise2NoiseTD-ANM, that can be trained without high-dose CT projection ground truth images.
arXiv Detail & Related papers (2022-11-01T20:58:50Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI by using a 3D patch-based conditional generative adversarial network (cGAN).
We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and simulations using the acoustic toolbox k-Wave.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation [4.3971310109651665]
In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are only used for patient setup during radiotherapy.
Treatment planning is done once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OARs) structures.
If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment and for deriving biomarkers for treatment response.
arXiv Detail & Related papers (2021-03-09T19:51:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.