Evaluation of Synthetically Generated CT for use in Transcranial Focused
Ultrasound Procedures
- URL: http://arxiv.org/abs/2210.14775v1
- Date: Wed, 26 Oct 2022 15:15:24 GMT
- Authors: Han Liu, Michelle K. Sigona, Thomas J. Manuel, Li Min Chen, Benoit M.
Dawant, Charles F. Caskey
- Abstract summary: Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI by using a 3D patch-based conditional generative adversarial network (cGAN). We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and simulations using the acoustic toolbox, k-Wave.
- Score: 5.921808547303054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method
that focuses sound through the skull to a small region noninvasively and often
under MRI guidance. CT imaging is used to estimate the acoustic properties that
vary between individual skulls to enable effective focusing during tFUS
procedures, exposing patients to potentially harmful radiation. A method to
estimate acoustic parameters in the skull without the need for CT would be
desirable. Here, we synthesized CT images from routinely acquired T1-weighted
MRI by using a 3D patch-based conditional generative adversarial network (cGAN)
and evaluated the performance of synthesized CT (sCT) images for treatment
planning with tFUS. We compared the performance of sCT to real CT (rCT) images
for tFUS planning using Kranion and simulations using the acoustic toolbox,
k-Wave. Simulations were performed for 3 tFUS scenarios: 1) no aberration
correction, 2) correction with phases calculated from Kranion, and 3) phase
shifts calculated from time-reversal. From Kranion, skull density ratio, skull
thickness, and number of active elements between rCT and sCT had Pearson's
Correlation Coefficients of 0.94, 0.92, and 0.98, respectively. Among 20
targets, differences in simulated peak pressure between rCT and sCT were
largest without phase correction (12.4$\pm$8.1%) and smallest with Kranion
phases (7.3$\pm$6.0%). The distance between peak focal locations between rCT
and sCT was less than 1.3 mm for all simulation cases. Real and synthetically
generated skulls had comparable image similarity, skull measurements, and
acoustic simulation metrics. Our work demonstrates the feasibility of replacing
real CTs with the MR-synthesized CT for tFUS planning. Source code and a docker
image with the trained model are available at
https://github.com/han-liu/SynCT_TcMRgFUS
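The evaluation above compares rCT and sCT along three kinds of metrics: Pearson correlation of skull measurements, percent difference in simulated peak pressure, and displacement of the peak focal location. A minimal sketch of these comparisons is below; the function names, the isotropic voxel-spacing assumption, and the reference convention (percent difference taken relative to the rCT value) are illustrative assumptions, not the authors' evaluation code:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired skull metrics
    (e.g. skull density ratio measured on rCT vs. on sCT)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def peak_pressure_diff_pct(p_rct, p_sct):
    """Percent difference in simulated peak pressure, relative to the
    rCT value (assumed reference)."""
    return 100.0 * abs(p_rct - p_sct) / p_rct

def focal_shift_mm(loc_rct, loc_sct, voxel_mm=1.0):
    """Euclidean distance (mm) between peak focal locations given as
    voxel coordinates, assuming isotropic voxel spacing."""
    return voxel_mm * math.dist(loc_rct, loc_sct)
```

With metrics like these, the paper's headline numbers (e.g. r = 0.94 for skull density ratio, focal shifts under 1.3 mm) could be reproduced given per-subject measurements from both image types.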
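Simulation scenario 2 relies on per-element phases from Kranion, which estimates ray paths from each transducer element through the skull to the target. A toy sketch of ray-based phase correction is given below; the driving frequency, sound speeds, and function names are stated assumptions, not Kranion's actual implementation (which also accounts for CT-derived density along each ray):

```python
import math

F0 = 650e3        # driving frequency in Hz (assumed; device-dependent)
C_WATER = 1480.0  # speed of sound in water, m/s
C_SKULL = 2800.0  # nominal longitudinal sound speed in skull bone, m/s (assumed)

def arrival_time(path_water_m, path_skull_m):
    """Travel time from one element to the target along a straight ray,
    split into its water and skull segments."""
    return path_water_m / C_WATER + path_skull_m / C_SKULL

def phase_corrections(paths):
    """Per-element phase shifts (radians) that equalize arrival times at
    the target so the waves interfere constructively there.
    `paths` is a list of (water_m, skull_m) tuples, one per element."""
    times = [arrival_time(w, s) for w, s in paths]
    t_ref = max(times)  # delay every element to match the latest arrival
    return [(2 * math.pi * F0 * (t_ref - t)) % (2 * math.pi) for t in times]
```

Elements whose rays cross thicker skull arrive at different times than the rest; the correction compensates for that spread, which is why scenario 1 (no correction) shows the largest rCT-vs-sCT pressure discrepancy.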
Related papers
- DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
  We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process. By doing so, DiffuX2CT achieves structure-controllable reconstruction, enabling 3D structural information to be recovered from 2D X-rays. As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
  arXiv Detail & Related papers (2024-07-18T14:20:04Z)
- Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
  Cone Beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by accurately providing radiation treatment when organ anatomy changes occur. However, CBCT images suffer from scatter noise and artifacts, making it challenging to rely solely on CBCT for precise dose calculation and accurate tissue localization. We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
  arXiv Detail & Related papers (2023-08-07T07:23:43Z)
- Improved Prognostic Prediction of Pancreatic Cancer Using Multi-Phase CT by Integrating Neural Distance and Texture-Aware Transformer [37.55853672333369]
  This paper proposes a novel learnable neural distance that describes the precise relationship between the tumor and vessels in CT images of different patients. The developed risk marker was the strongest predictor of overall survival among preoperative factors.
  arXiv Detail & Related papers (2023-08-01T12:46:02Z)
- Synthetic CT Generation from MRI using 3D Transformer-based Denoising Diffusion Model [2.232713445482175]
  Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning. We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
  arXiv Detail & Related papers (2023-05-31T00:32:00Z)
- Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating Neural Field [37.86878619100209]
  NeRF has received wide attention in Sparse-View (SV) CT reconstruction as a self-supervised deep learning framework. Existing NeRF-based SVCT methods strictly assume there is no relative motion during CT acquisition. This work proposes a self-calibrating neural field that recovers an artifact-free image from rigid motion-corrupted SV measurements.
  arXiv Detail & Related papers (2022-10-23T13:55:07Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
  Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment. We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
  arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Classical and learned MR to pseudo-CT mappings for accurate transcranial ultrasound simulation [0.33598755777055367]
  Three methods for generating pseudo-CT images from magnetic resonance (MR) images were compared. Ultrasound simulations were also performed using the generated pseudo-CT images and compared to simulations based on CT.
  arXiv Detail & Related papers (2022-06-30T17:33:44Z)
- Synthetic CT Skull Generation for Transcranial MR Imaging-Guided Focused Ultrasound Interventions with Conditional Adversarial Networks [5.921808547303054]
  Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively under MRI guidance. To accurately target ultrasound through the skull, the transmitted waves must constructively interfere at the target region.
  arXiv Detail & Related papers (2022-02-21T11:34:29Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
  We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN). For a few CT images, the user draws markers at representative normal and abnormal regions. The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
  arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
  We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans. Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran. Our empirical results show that CyTran outperforms all competing methods.
  arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation [4.3971310109651665]
  In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are used only for patient setup during radiotherapy. Treatment planning is done once at the beginning of treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OAR) structures. If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment and for deriving biomarkers for treatment response.
  arXiv Detail & Related papers (2021-03-09T19:51:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.