Surrogate-free machine learning-based organ dose reconstruction for
pediatric abdominal radiotherapy
- URL: http://arxiv.org/abs/2002.07161v2
- Date: Wed, 10 Feb 2021 17:30:15 GMT
- Title: Surrogate-free machine learning-based organ dose reconstruction for
pediatric abdominal radiotherapy
- Authors: M. Virgolin, Z. Wang, B.V. Balgobind, I.W.E.M. van Dijk, J. Wiersma,
P.S. Kroon, G.O. Janssens, M. van Herk, D.C. Hodgson, L. Zadravec Zaletel,
C.R.N. Rasch, A. Bel, P.A.N. Bosman, T. Alderliesten
- Abstract summary: State-of-the-art methods achieve this by using 3D surrogate anatomies.
We present and validate a surrogate-free dose reconstruction method based on Machine Learning (ML)
Our novel, ML-based organ dose reconstruction method is not only accurate but also efficient, as the setup of a surrogate is no longer needed.
- Score: 0.19359975080269876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To study radiotherapy-related adverse effects, detailed dose information (3D
distribution) is needed for accurate dose-effect modeling. For childhood cancer
survivors who underwent radiotherapy in the pre-CT era, only 2D radiographs
were acquired, thus 3D dose distributions must be reconstructed from limited
information. State-of-the-art methods achieve this by using 3D surrogate
anatomies. These can lack personalization and lead to coarse reconstructions.
We present and validate a surrogate-free dose reconstruction method based on
Machine Learning (ML). Abdominal planning CTs ($n$=142) of recently-treated
childhood cancer patients were gathered, their organs at risk were segmented,
and 300 artificial Wilms' tumor plans were sampled automatically. Each
artificial plan was automatically emulated on the 142 CTs, resulting in 42,600
3D dose distributions from which dose-volume metrics were derived. Anatomical
features were extracted from digitally reconstructed radiographs simulated from
the CTs to resemble historical radiographs. Further, patient and radiotherapy
plan features typically available from historical treatment records were
collected. An evolutionary ML algorithm was then used to link features to
dose-volume metrics. Besides 5-fold cross-validation, a further evaluation was
done on an independent dataset of five CTs each associated with two clinical
plans. Cross-validation resulted in Mean Absolute Errors (MAEs) $\leq$0.6 Gy
for organs completely inside or outside the field. For organs positioned at the
edge of the field, MAEs $\leq$1.7 Gy for D$_{mean}$, $\leq$2.9 Gy for
D$_{2cc}$, and $\leq$13% for V$_{5Gy}$ and V$_{10Gy}$, were obtained, without
systematic bias. Similar results were found for the independent dataset. Our
novel, ML-based organ dose reconstruction method is not only accurate but also
efficient, as the setup of a surrogate is no longer needed.
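The paper's evaluation protocol (5-fold cross-validation reporting Mean Absolute Error per dose-volume metric) can be sketched as below. This is a minimal illustration on synthetic data: the evolutionary ML algorithm itself is not reproduced, and a scikit-learn gradient-boosting regressor stands in for it; the feature counts and target are invented for demonstration.

```python
# Hedged sketch: 5-fold cross-validated MAE for one dose-volume metric
# (e.g. D_mean of an organ, in Gy). A GradientBoostingRegressor stands in
# for the paper's evolutionary ML algorithm; data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 10))                       # anatomical + plan features, one row per patient
y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=142)  # synthetic dose-volume metric target

maes = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print(f"5-fold MAE: {np.mean(maes):.2f} +/- {np.std(maes):.2f} Gy")
```

In the paper this per-metric MAE is what underlies the reported thresholds (e.g. MAE $\leq$0.6 Gy for organs fully inside or outside the field).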
Related papers
- Swin UNETR++: Advancing Transformer-Based Dense Dose Prediction Towards Fully Automated Radiation Oncology Treatments [0.0]
We propose Swin UNETR++, which contains a lightweight 3D Dual Cross-Attention (DCA) module to capture the intra- and inter-volume relationships of each patient's anatomy.
Our model was trained, validated, and tested on the Open Knowledge-Based Planning dataset.
arXiv Detail & Related papers (2023-11-11T13:52:59Z)
- CT Reconstruction from Few Planar X-rays with Application towards Low-resource Radiotherapy [20.353246282326943]
We propose a method to generate CT volumes from few (5) planar X-ray observations using a prior data distribution.
To focus the generation task on clinically-relevant features, our model can also leverage anatomical guidance during training.
Our method is better than recent sparse CT reconstruction baselines in terms of standard pixel and structure-level metrics.
arXiv Detail & Related papers (2023-08-04T01:17:57Z)
- Weakly Supervised AI for Efficient Analysis of 3D Pathology Samples [6.381153836752796]
We present Modality-Agnostic Multiple instance learning for volumetric Block Analysis (MAMBA) for processing 3D tissue images.
With the 3D block-based approach, MAMBA achieves areas under the receiver operating characteristic curve (AUC) of 0.86 and 0.74, superior to traditional 2D single-slice-based prognostication.
Further analyses reveal that the incorporation of greater tissue volume improves prognostic performance and mitigates risk prediction variability from sampling bias.
arXiv Detail & Related papers (2023-07-27T14:48:02Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- 3D-Morphomics, Morphological Features on CT scans for lung nodule malignancy diagnosis [8.728543774561405]
The study develops a predictive model of the pathological states based on morphological features (3D-morphomics) on Computed Tomography (CT) volumes.
An XGBoost supervised classifier is then trained and tested on the 3D-morphomics to predict the pathological states.
Using 3D-morphomics only, the model classifying lung nodules as malignant vs. benign achieves an AUC of 0.964.
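The classification step of such a morphomics pipeline can be sketched as follows, assuming precomputed morphological feature vectors; scikit-learn's GradientBoostingClassifier stands in for XGBoost here, and the features and labels are synthetic stand-ins.

```python
# Sketch of the classification step: a gradient-boosted classifier on
# morphological feature vectors, scored by AUC. Synthetic data only;
# GradientBoostingClassifier is a stand-in for the paper's XGBoost model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))   # 8 synthetic shape features per nodule
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```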
arXiv Detail & Related papers (2022-07-27T23:50:47Z)
- Federated Learning Enables Big Data for Rare Cancer Boundary Detection [98.5549882883963]
We present findings from the largest federated ML study to date, involving data from 71 healthcare institutions across 6 continents.
We generate an automatic tumor boundary detector for the rare disease of glioblastoma.
We demonstrate a 33% improvement over a publicly trained model to delineate the surgically targetable tumor, and 23% improvement over the tumor's entire extent.
arXiv Detail & Related papers (2022-04-22T17:27:00Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out of distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642$\pm$0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859$\pm$0.112 while detecting artificially induced anomalies.
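The Dice score used here is the standard overlap measure between a predicted binary mask and the ground truth; a minimal sketch (with a toy 2D example in place of 3D brain volumes) is:

```python
# Minimal sketch of the Dice score used to evaluate segmentation overlap:
# 2 * |pred AND truth| / (|pred| + |truth|), on binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred  = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # 16 pixels
truth = np.zeros((8, 8), dtype=int); truth[3:7, 3:7] = 1  # 16 pixels, 9 overlapping
print(dice(pred, truth))  # -> 0.5625
```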
arXiv Detail & Related papers (2022-01-31T14:27:35Z)
- iPhantom: a framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry [58.943644554192936]
This study aims to develop and validate a novel framework, iPhantom, for automated creation of patient-specific phantoms or digital-twins.
The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients.
iPhantom predicted all organ locations with good accuracy, achieving Dice Similarity Coefficients (DSC) >0.6 for anchor organs and DSC of 0.3-0.9 for all other organs.
arXiv Detail & Related papers (2020-08-20T01:50:49Z)
- Patient-Specific Finetuning of Deep Learning Models for Adaptive Radiotherapy in Prostate CT [1.3124513975412255]
Contouring of the target volume and Organs-At-Risk (OARs) is a crucial step in radiotherapy treatment planning.
In this work, we leverage personalized anatomical knowledge accumulated over the treatment sessions to improve the segmentation accuracy of a pre-trained Convolutional Neural Network (CNN).
We investigate a transfer learning approach, fine-tuning the baseline CNN model to a specific patient, based on imaging acquired in earlier treatment fractions.
arXiv Detail & Related papers (2020-02-17T12:53:37Z)
- Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
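Rule-based label extraction of the kind this last paper describes can be sketched as below. The keyword rules and negation pattern are hypothetical examples for illustration, not the paper's actual rules.

```python
# Illustrative sketch of rule-based abnormality label extraction from
# free-text radiology reports. RULES and NEGATION below are hypothetical
# examples, not the paper's rule set.
import re

RULES = {
    "nodule": re.compile(r"\bnodules?\b", re.I),
    "atelectasis": re.compile(r"\batelectasis\b", re.I),
    "effusion": re.compile(r"\b(pleural )?effusions?\b", re.I),
}
NEGATION = re.compile(r"\bno (evidence of )?\b", re.I)

def extract_labels(report: str) -> dict:
    """Return {label: present?} for each rule that fires in the report."""
    labels = {}
    for sentence in re.split(r"[.;]\s*", report):
        negated = bool(NEGATION.search(sentence))
        for label, pattern in RULES.items():
            if pattern.search(sentence):
                labels[label] = labels.get(label, False) or not negated
    return labels

print(extract_labels("Small pulmonary nodule. No pleural effusion."))
# -> {'nodule': True, 'effusion': False}
```

A production system would need far richer negation and uncertainty handling than this sentence-level keyword match.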
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.