Large-Language-Model Empowered Dose Volume Histogram Prediction for
Intensity Modulated Radiotherapy
- URL: http://arxiv.org/abs/2402.07167v1
- Date: Sun, 11 Feb 2024 11:24:09 GMT
- Title: Large-Language-Model Empowered Dose Volume Histogram Prediction for
Intensity Modulated Radiotherapy
- Authors: Zehao Dong, Yixin Chen, Hiram Gay, Yao Hao, Geoffrey D. Hugo, Pamela
Samson, Tianyu Zhao
- Abstract summary: We propose a pipeline to convert unstructured images to a structured graph consisting of image-patch nodes and dose nodes.
A novel Dose Graph Neural Network (DoseGNN) model is developed for predicting Dose-Volume histograms (DVHs) from the structured graph.
In this study, we introduced an online human-AI collaboration system as a practical implementation of the concept proposed for the automation of intensity-modulated radiotherapy (IMRT) planning.
- Score: 11.055104826451126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Treatment planning is currently a patient-specific, time-consuming, and
resource-demanding task in radiotherapy. Dose-volume histogram (DVH) prediction
plays a critical role in automating this process. The geometric relationship
between DVHs in radiotherapy plans and organs-at-risk (OAR) and planning target
volume (PTV) has been well established. This study explores the potential of
deep learning models for predicting DVHs using images and subsequent human
intervention facilitated by a large-language model (LLM) to enhance the
planning quality. We propose a pipeline to convert unstructured images to a
structured graph consisting of image-patch nodes and dose nodes. A novel Dose
Graph Neural Network (DoseGNN) model is developed for predicting DVHs from the
structured graph. The proposed DoseGNN is enhanced with the LLM to encode
massive knowledge from prescriptions and interactive instructions from
clinicians. In this study, we introduced an online human-AI collaboration
(OHAC) system as a practical implementation of the concept proposed for the
automation of intensity-modulated radiotherapy (IMRT) planning. Compared with
the DL models widely employed in radiotherapy, DoseGNN achieved mean squared
errors that were 80%, 76%, and 41.0% of those of the Swin U-Net Transformer,
3D U-Net CNN, and vanilla MLP, respectively. Moreover, the
LLM-empowered DoseGNN model facilitates seamless adjustment to treatment plans
through interaction with clinicians using natural language.
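As background for the abstract, a cumulative DVH reports, for each dose level, the fraction of a structure's volume receiving at least that dose. A minimal NumPy sketch (illustrative only; the function name, binning, and toy data are assumptions, not the authors' pipeline):

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative DVH: for each dose level d, the fraction of the
    structure's volume receiving at least d Gy."""
    voxel_doses = dose[mask]            # doses inside the structure
    max_dose = voxel_doses.max()
    bins = np.arange(0.0, max_dose + bin_width, bin_width)
    # Fraction of structure voxels whose dose meets or exceeds each bin edge
    volume_fraction = np.array([(voxel_doses >= b).mean() for b in bins])
    return bins, volume_fraction

# Toy example: a uniform 2 Gy dose inside a small cubic structure
dose = np.full((4, 4, 4), 2.0)
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
bins, vf = cumulative_dvh(dose, mask)
# vf[0] is 1.0 (all structure voxels receive at least 0 Gy), and the
# curve is non-increasing with dose, as any cumulative DVH must be.
```

Plan-quality criteria (e.g. "at most x% of an OAR's volume may receive more than y Gy") read directly off such a curve, which is why DVH prediction is a useful surrogate for full dose prediction.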
Related papers
- ARANet: Attention-based Residual Adversarial Network with Deep Supervision for Radiotherapy Dose Prediction of Cervical Cancer [5.737832138199829]
We propose an end-to-end Attention-based Residual Adversarial Network with deep supervision, namely ARANet, to automatically predict the 3D dose distribution for cervical cancer.
Our proposed method is validated on an in-house dataset of 54 cervical cancer patients, and experimental results demonstrate its clear superiority over other state-of-the-art methods.
arXiv Detail & Related papers (2024-08-26T02:26:09Z)
- DoseGNN: Improving the Performance of Deep Learning Models in Adaptive Dose-Volume Histogram Prediction through Graph Neural Networks [15.101256852252936]
This paper extends recently disclosed research findings presented at the AAPM 65th Annual Meeting & Exhibition.
The objective is to design efficient deep learning models for DVH prediction on a general radiotherapy platform equipped with a high-performance CBCT system.
Deep learning models widely adopted for the DVH prediction task are evaluated on the novel radiotherapy platform.
arXiv Detail & Related papers (2024-02-02T00:28:19Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Customizing General-Purpose Foundation Models for Medical Report Generation [64.31265734687182]
The scarcity of labelled medical image-report pairs presents great challenges in the development of deep and large-scale neural networks.
We propose customizing off-the-shelf general-purpose large-scale pre-trained models, i.e., foundation models (FMs) in computer vision and natural language processing.
arXiv Detail & Related papers (2023-06-09T03:02:36Z)
- Unsupervised pre-training of graph transformers on patient population graphs [48.02011627390706]
We propose a graph-transformer-based network to handle heterogeneous clinical data.
We show the benefit of our pre-training method in a self-supervised and a transfer learning setting.
arXiv Detail & Related papers (2022-07-21T16:59:09Z)
- Unsupervised Pre-Training on Patient Population Graphs for Patient-Level Predictions [48.02011627390706]
Pre-training has shown success in different areas of machine learning, such as Computer Vision (CV), Natural Language Processing (NLP) and medical imaging.
In this paper, we apply unsupervised pre-training to heterogeneous, multi-modal EHR data for patient outcome prediction.
We find that our proposed graph based pre-training method helps in modeling the data at a population level.
arXiv Detail & Related papers (2022-03-23T17:59:45Z)
- Deep Learning 3D Dose Prediction for Conventional Lung IMRT Using Consistent/Unbiased Automated Plans [3.4742750855568767]
In this work, we use consistent plans generated by our in-house automated planning system (named "ECHO") to train the DL model.
ECHO generates consistent/unbiased plans by solving large-scale constrained optimization problems sequentially.
The quality of the predictions was compared using different DVH metrics as well as dose-score and DVH-score, recently introduced by the AAPM knowledge-based planning grand challenge.
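For context, the dose score used in the AAPM knowledge-based planning (OpenKBP) grand challenge is essentially a voxel-wise mean absolute error between predicted and reference dose within a region of interest. A hedged sketch (the function name, mask handling, and toy values are illustrative assumptions):

```python
import numpy as np

def dose_score(pred, ref, mask):
    """Mean absolute voxel-wise error between predicted and reference
    dose, restricted to a region-of-interest mask (sketch in the spirit
    of the OpenKBP dose score; not the challenge's exact implementation)."""
    return np.abs(pred[mask] - ref[mask]).mean()

# Toy 2x2 dose grids: per-voxel errors are 0.5, 0, 1.0, 0
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
ref = np.array([[1.5, 2.0], [2.0, 4.0]])
mask = np.ones_like(pred, dtype=bool)
score = dose_score(pred, ref, mask)  # (0.5 + 0 + 1.0 + 0) / 4 = 0.375
```

The companion DVH score instead aggregates errors on clinically relevant DVH metrics (e.g. mean and near-maximum OAR doses), so the two scores probe complementary aspects of plan quality.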
arXiv Detail & Related papers (2021-06-07T15:15:05Z)
- Interactive Radiotherapy Target Delineation with 3D-Fused Context Propagation [28.97228589610255]
Convolutional neural networks (CNNs) have predominated in automatic 3D medical segmentation tasks.
We propose 3D-fused context propagation, which propagates any edited slice to the whole 3D volume.
arXiv Detail & Related papers (2020-12-12T17:46:20Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
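The Dice score cited above measures the overlap between a predicted and a reference segmentation mask. A minimal sketch (illustrative only; not the paper's implementation):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy masks: intersection = 1 voxel, |a| + |b| = 4, Dice = 2*1/4 = 0.5
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
d = dice(a, b)
```

Because Dice normalizes overlap by total mask size, a 1-2% absolute gain is meaningful for small organs such as the pancreas, where a few misclassified voxels shift the score noticeably.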
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Learning Tumor Growth via Follow-Up Volume Prediction for Lung Nodules [15.069141581681016]
Follow-up serves an important role in the management of pulmonary nodules for lung cancer.
Recent deep learning studies that use convolutional neural networks (CNNs) to predict the malignancy score of nodules only provide clinicians with black-box predictions.
We propose a unified framework, named Nodule Follow-Up Prediction Network (NoFoNet), which predicts the growth of pulmonary nodules with high-quality visual appearances and accurate quantitative results.
arXiv Detail & Related papers (2020-06-24T17:18:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.