Interactive Radiotherapy Target Delineation with 3D-Fused Context
Propagation
- URL: http://arxiv.org/abs/2012.06873v1
- Date: Sat, 12 Dec 2020 17:46:20 GMT
- Authors: Chun-Hung Chao, Hsien-Tzu Cheng, Tsung-Ying Ho, Le Lu, and Min Sun
- Abstract summary: Convolutional neural networks (CNNs) have become predominant in automatic 3D medical segmentation tasks.
We propose 3D-fused context propagation, which propagates any edited slice to the whole 3D volume.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gross tumor volume (GTV) delineation on tomographic medical imaging is crucial
for radiotherapy planning and cancer diagnosis. Convolutional neural networks
(CNNs) have become predominant in automatic 3D medical segmentation tasks,
including contouring the radiotherapy target in a 3D CT volume. While CNNs may
produce feasible results, in clinical scenarios double-checking and prediction
refinement by experts remain necessary because of CNNs' inconsistent
performance on unexpected patient cases. To give experts an efficient way to
modify CNN predictions without retraining the model, we propose 3D-fused
context propagation, which propagates any edited slice to the whole 3D volume.
By operating on the high-level feature maps, radiation oncologists need only
edit a few slices to guide the correction and refine the whole prediction
volume. Specifically, we leverage the backpropagation-for-activation technique
to convey the user's editing information backward to the latent space and
generate a new prediction based on the updated and original features. During
the interaction, our approach reuses the already-extracted features and does
not alter the existing 3D CNN model architecture, avoiding perturbation of
other predictions. The proposed method is evaluated on two published
radiotherapy target contouring datasets of nasopharyngeal and esophageal
cancer. The experimental results demonstrate that, given oncologists'
interactive inputs, our method effectively improves existing segmentation
predictions from different model architectures.
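The core mechanism — freeze the network, back-propagate the expert's slice edit into the latent features, then re-decode the whole volume — can be sketched with a toy linear encoder/decoder. Everything here (the model, `decode`, `propagate_edit`, all shapes and hyperparameters) is an illustrative stand-in for the paper's 3D CNN, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, P = 8, 6, 16               # slices, latent channels, pixels per slice
W = rng.normal(size=(K, P))      # frozen "decoder" weights (stand-in for a CNN head)
feat = rng.normal(size=(D, K))   # latent features from the frozen "encoder"

def decode(f):
    """Toy decoder: each slice's prediction mixes the latent features of the
    slice and its two neighbors, so edits to the latents propagate in 3D."""
    up = np.vstack([f[:1], f[:-1]])    # f[z-1], border-clamped
    down = np.vstack([f[1:], f[-1:]])  # f[z+1], border-clamped
    return (up + f + down) @ W

def propagate_edit(f, edited, z, steps=300, lr=0.005):
    """Gradient descent on the latent features (weights stay frozen) so the
    decoded slice z matches the expert's edit. Neighboring slices are refined
    too, because decoding mixes neighboring latents."""
    f = f.copy()
    for _ in range(steps):
        resid = decode(f)[z] - edited       # squared-error loss on slice z only
        g = 2.0 * (resid @ W.T)             # dL/df[j] for j in {z-1, z, z+1}
        for j in (z - 1, z, z + 1):
            if 0 <= j < f.shape[0]:
                f[j] -= lr * g
    return f

edit = np.ones(P)                           # expert paints slice 4 as all-foreground
new_feat = propagate_edit(feat, edit, z=4)
refined = decode(new_feat)                  # re-decode the whole volume
```

Note that only the activations are updated, never `W`: this mirrors the paper's claim that the existing model architecture and weights are untouched, so predictions far from the edited slice are left largely unperturbed.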
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - DoseGNN: Improving the Performance of Deep Learning Models in Adaptive
Dose-Volume Histogram Prediction through Graph Neural Networks [15.101256852252936]
This paper extends recently disclosed research findings presented at the AAPM 65th Annual Meeting & Exhibition.
The objective is to design efficient deep learning models for DVH prediction on a general radiotherapy platform equipped with a high-performance CBCT system.
Deep learning models widely adopted for the DVH prediction task are evaluated on the novel radiotherapy platform.
arXiv Detail & Related papers (2024-02-02T00:28:19Z) - The Impact of Loss Functions and Scene Representations for 3D/2D
Registration on Single-view Fluoroscopic X-ray Pose Estimation [1.758213853394712]
We first develop a differentiable projection rendering framework for the efficient computation of Digitally Reconstructed Radiographs (DRRs)
We then perform pose estimation by iterative descent using various candidate loss functions, that quantify the image discrepancy of the synthesized DRR with respect to the ground-truth fluoroscopic X-ray image.
Using the Mutual Information loss, a comprehensive evaluation of pose estimation performed on a tomographic X-ray dataset of 50 patients' skulls shows that utilizing either discretized (CBCT) or neural (NeTT/mNeRF) scene representations in DiffProj leads to
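The render-score-update loop this entry describes — synthesize a projection from a candidate pose, measure its discrepancy against the target X-ray, and search for the minimizer — can be illustrated with a 1-parameter toy. The shear-based `render` and the MSE score below are simplified stand-ins for a real DRR renderer and losses such as Mutual Information, and the coarse grid search stands in for the paper's iterative descent:

```python
import numpy as np

def render(phantom, angle):
    """Toy 'DRR': shear a 2-D phantom row-by-row (parameterized by angle),
    then integrate along columns — a stand-in for projective rendering."""
    h, _ = phantom.shape
    sheared = np.empty_like(phantom)
    for r in range(h):
        sheared[r] = np.roll(phantom[r], int(angle * r))
    return sheared.sum(axis=0)

def estimate_pose(phantom, target_proj, candidates):
    """Pick the pose whose synthesized projection best matches the target,
    scored by mean squared error (the paper explores several losses)."""
    losses = [np.mean((render(phantom, a) - target_proj) ** 2)
              for a in candidates]
    return candidates[int(np.argmin(losses))]

rng = np.random.default_rng(0)
phantom = rng.random((16, 32))
candidates = np.linspace(-1, 1, 21)
true_angle = candidates[15]            # ~0.5, on the search grid by construction
target = render(phantom, true_angle)   # "ground-truth fluoroscopic" projection
est = estimate_pose(phantom, target, candidates)
```

A real pipeline makes `render` differentiable so the pose can be refined by gradient descent rather than exhaustive search, which is exactly the motivation for the differentiable projection framework described above.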
arXiv Detail & Related papers (2023-08-01T01:12:29Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Explainable multiple abnormality classification of chest CT volumes with
AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z) - A Novel Multi-scale Dilated 3D CNN for Epileptic Seizure Prediction [6.688907774518885]
A novel convolutional neural network (CNN) is proposed to analyze time, frequency, and channel information of electroencephalography (EEG) signals.
The model uses three-dimensional (3D) kernels to facilitate the feature extraction over the three dimensions.
The proposed CNN model is evaluated on the CHB-MIT EEG database; the experimental results indicate that our model outperforms the existing state of the art.
arXiv Detail & Related papers (2021-05-05T07:13:53Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
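The paper casts refinement as semi-supervised graph learning with a GCN; the snippet below illustrates only the underlying intuition — trust the network where it is confident, and revisit voxels where it is not — using a naive majority vote over confident neighbors (all names and thresholds here are hypothetical simplifications):

```python
import numpy as np

def refine_uncertain(prob, tau=0.2):
    """Toy uncertainty-driven refinement: voxels whose foreground probability
    lies within tau of 0.5 are treated as uncertain and relabeled by the
    majority vote of their *confident* 6-neighbors."""
    label = (prob > 0.5).astype(np.int32)
    uncertain = np.abs(prob - 0.5) < tau
    refined = label.copy()
    D, H, W = prob.shape
    for z, y, x in zip(*np.nonzero(uncertain)):
        votes = []
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < D and 0 <= ny < H and 0 <= nx < W
                    and not uncertain[nz, ny, nx]):
                votes.append(label[nz, ny, nx])
        if votes:  # leave the voxel unchanged if no confident neighbor exists
            refined[z, y, x] = int(np.mean(votes) > 0.5)
    return refined

prob = np.full((4, 4, 4), 0.9)   # confidently-foreground block
prob[2, 2, 2] = 0.45             # one uncertain voxel inside it
out = refine_uncertain(prob)     # the uncertain voxel is flipped to foreground
```

The GCN formulation in the paper learns this neighborhood reasoning rather than hard-coding it, which is what yields the reported Dice gains over CRF refinement.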
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z) - Segmentation-free Estimation of Aortic Diameters from MRI Using Deep
Learning [2.231365407061881]
We propose a supervised deep learning method for the direct estimation of aortic diameters.
Our approach makes use of a 3D+2D convolutional neural network (CNN) that takes as input a 3D scan and outputs the aortic diameter at a given location.
Overall, the 3D+2D CNN achieved a mean absolute error between 2.2 and 2.4 mm, depending on the aortic location considered.
arXiv Detail & Related papers (2020-09-09T18:28:00Z) - Ensemble Transfer Learning for the Prediction of Anti-Cancer Drug
Response [49.86828302591469]
In this paper, we apply transfer learning to the prediction of anti-cancer drug response.
We apply the classic transfer learning framework that trains a prediction model on the source dataset and refines it on the target dataset.
The ensemble transfer learning pipeline is implemented using LightGBM and two deep neural network (DNN) models with different architectures.
arXiv Detail & Related papers (2020-05-13T20:29:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.