Accurate Patient Alignment without Unnecessary Imaging Dose via Synthesizing Patient-specific 3D CT Images from 2D kV Images
- URL: http://arxiv.org/abs/2405.19338v1
- Date: Mon, 1 Apr 2024 19:55:03 GMT
- Title: Accurate Patient Alignment without Unnecessary Imaging Dose via Synthesizing Patient-specific 3D CT Images from 2D kV Images
- Authors: Yuzhen Ding, Jason M. Holmes, Hongying Feng, Baoxin Li, Lisa A. McGee, Jean-Claude M. Rwigema, Sujay A. Vora, Daniel J. Ma, Robert L. Foote, Samir H. Patel, Wei Liu
- Abstract summary: Tumor visibility is constrained because the patient's anatomy is projected onto a 2D plane.
In treatment rooms with 3D-OBI such as cone-beam CT (CBCT), the CBCT field of view (FOV) is limited and the imaging dose is unnecessarily high.
We propose a dual-model framework built with hierarchical ViT blocks to reconstruct 3D CT from kV images obtained at the treatment position.
- Score: 10.538839084727975
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In radiotherapy, 2D orthogonally projected kV images are used for patient alignment when 3D on-board imaging (OBI) is unavailable. However, tumor visibility is constrained because the patient's anatomy is projected onto a 2D plane, potentially leading to substantial setup errors. In treatment rooms with 3D-OBI such as cone-beam CT (CBCT), the CBCT field of view (FOV) is limited and the imaging dose is unnecessarily high, which is unfavorable for pediatric patients. A solution to this dilemma is to reconstruct 3D CT from kV images obtained at the treatment position. Here, we propose a dual-model framework built with hierarchical ViT blocks. Unlike prior proof-of-concept approaches, our framework takes kV images as the sole input and can synthesize accurate, full-size 3D CT in real time (within milliseconds). We demonstrate the feasibility of the proposed approach on 10 patients with head and neck (H&N) cancer using image quality (MAE < 45 HU), dosimetric accuracy (gamma passing rate (2%/2mm/10%) > 97%), and patient position uncertainty (shift error < 0.4 mm). The proposed framework can generate accurate 3D CT that faithfully mirrors the real-time patient position, thus significantly improving patient setup accuracy while keeping the imaging dose to a minimum and maintaining treatment veracity.
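The abstract gives no implementation details, so the following is only a minimal, illustrative sketch of what a dual-model kV-to-CT pipeline built from ViT-style blocks could look like; the patch size, token dimension, decoder layout, toy output resolution, and the averaging fusion of the two orthogonal views are assumptions, not the authors' architecture.

```python
# Minimal sketch (assumptions, not the authors' implementation): one ViT-style
# encoder per orthogonal kV projection, each decoded into a coarse 3D CT, then
# fused by simple averaging.
import torch
import torch.nn as nn

class KVEncoder(nn.Module):
    """ViT-style encoder for a single 2D kV projection."""
    def __init__(self, img_size=256, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                    # x: (B, 1, H, W)
        tok = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tok = self.blocks(tok + self.pos)
        return tok.mean(dim=1)                               # (B, dim) latent

class KV2CT(nn.Module):
    """One member of the dual-model pair: a single kV view -> coarse 3D CT."""
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = KVEncoder(dim=dim)
        self.to_seed = nn.Linear(dim, 32 * 8 * 8 * 8)        # 8^3 seed volume
        self.decoder = nn.Sequential(                        # 8^3 -> 64^3
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, kv):                                   # kv: (B, 1, 256, 256)
        seed = self.to_seed(self.encoder(kv)).view(-1, 32, 8, 8, 8)
        return self.decoder(seed)                            # (B, 1, 64, 64, 64)

# Dual-model usage: one network per orthogonal view (AP and lateral);
# the predictions are averaged into the final synthetic CT (fusion rule assumed).
model_ap, model_lat = KV2CT(), KV2CT()
kv_ap, kv_lat = torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256)
synthetic_ct = 0.5 * (model_ap(kv_ap) + model_lat(kv_lat))
print(synthetic_ct.shape)                                    # (1, 1, 64, 64, 64)
```

The toy 64^3 output only keeps the example small; the paper itself reports synthesizing full-size 3D CT within milliseconds from the treatment-position kV pair.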
Related papers
- Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2 [1.6275928583134276]
We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis.
MST offers enhanced diagnostic accuracy and explainability compared to convolutional neural networks.
arXiv Detail & Related papers (2024-11-24T12:11:11Z) - OCTCube: A 3D foundation model for optical coherence tomography that improves cross-dataset, cross-disease, cross-device and cross-modality analysis [11.346324975034051]
OCTCube is a 3D foundation model pre-trained on 26,605 3D OCT volumes encompassing 1.62 million 2D OCT images.
It outperforms 2D models when predicting 8 retinal diseases in both inductive and cross-dataset settings.
It also shows superior performance on cross-device prediction and when predicting systemic diseases, such as diabetes and hypertension.
arXiv Detail & Related papers (2024-08-20T22:55:19Z) - Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model [3.4248731707266264]
In neuroimaging, brain CT is generally more cost-effective and accessible than MRI.
Medical image-to-image translation (I2I) serves as a promising solution.
This study is the first to achieve high-quality 3D medical I2I based only on a 2D DM with no extra architectural models.
arXiv Detail & Related papers (2024-07-06T12:13:36Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, termed the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks (by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively) and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Deep-Learning-based Fast and Accurate 3D CT Deformable Image Registration in Lung Cancer [14.31661366393592]
The visibility of the tumor is limited since the patient's 3D anatomy is projected onto a 2D plane.
A solution is to reconstruct the 3D CT image from the kV images obtained at the treatment isocenter in the treatment position.
A patient-specific vision-transformer-based network was developed and shown to be accurate and efficient.
arXiv Detail & Related papers (2023-04-21T17:18:21Z) - Perspective Projection-Based 3D CT Reconstruction from Biplanar X-rays [32.98966469644061]
We propose PerX2CT, a novel framework for CT reconstruction from biplanar X-rays.
Our proposed method provides a different combination of features for each coordinate which implicitly allows the model to obtain information about the 3D location.
arXiv Detail & Related papers (2023-03-09T14:45:25Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D DL models for 3D chest CT scan classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide interpretability of the results (a generic 3D CAM sketch is shown after this list).
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Convolutional-LSTM for Multi-Image to Single Output Medical Prediction [55.41644538483948]
A common scenario in developing countries is that volume metadata is lost for multiple reasons.
It is possible to obtain a multi-image-to-single-output diagnostic model that mimics a human doctor's diagnostic process.
arXiv Detail & Related papers (2020-10-20T04:30:09Z)
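As referenced in the DNAS/CAM entry above, the sketch below illustrates Class Activation Mapping in its generic 3D form on a toy CNN; the placeholder network and layer sizes are assumptions and do not reproduce the architecture-searched models from that paper.

```python
# Generic 3D Class Activation Mapping (CAM) sketch. The tiny CNN is a
# placeholder; CAM itself is the standard weighted sum of the last conv
# feature maps using the classifier weights of the target class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(16, n_classes)  # applied after global avg pool

    def forward(self, x):                        # x: (B, 1, D, H, W)
        fmaps = self.features(x)                 # (B, 16, D/2, H/2, W/2)
        pooled = fmaps.mean(dim=(2, 3, 4))       # global average pooling
        return self.classifier(pooled), fmaps

def compute_cam(model, volume, target_class):
    """CAM_c(z, y, x) = sum_k w_{c,k} * f_k(z, y, x), upsampled to input size."""
    _, fmaps = model(volume)
    w = model.classifier.weight[target_class]            # (16,)
    cam = torch.einsum('k,bkdhw->bdhw', w, fmaps)        # weighted channel sum
    cam = F.interpolate(cam.unsqueeze(1), size=volume.shape[2:],
                        mode='trilinear', align_corners=False)
    cam = cam - cam.amin()                               # normalize to [0, 1]
    return cam / (cam.amax() + 1e-8)

ct_volume = torch.randn(1, 1, 32, 64, 64)                # toy chest-CT-like volume
model = Tiny3DCNN()
heatmap = compute_cam(model, ct_volume, target_class=1)
print(heatmap.shape)                                     # (1, 1, 32, 64, 64)
```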