A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning
- URL: http://arxiv.org/abs/2203.00624v1
- Date: Tue, 1 Mar 2022 17:08:41 GMT
- Title: A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning
- Authors: Fernando Navarro, Guido Sasahara, Suprosanna Shit, Ivan Ezhov, Jan C.
Peeken, Stephanie E. Combs and Bjoern H. Menze
- Abstract summary: Current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
- Score: 56.52933974838905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic localization and segmentation of organs-at-risk (OAR) in CT are
essential pre-processing steps in medical image analysis tasks, such as
radiation therapy planning. For instance, the segmentation of OAR surrounding
tumors enables the maximization of radiation to the tumor area without
compromising the healthy tissues. However, the current medical workflow
requires manual delineation of OAR, which is prone to errors and is
annotator-dependent. In this work, we aim to introduce a unified 3D pipeline
for OAR localization-segmentation rather than novel localization or
segmentation architectures. To the best of our knowledge, our proposed
framework fully enables the exploitation of 3D context information inherent in
medical imaging. In the first step, a 3D multi-variate regression network
predicts organs' centroids and bounding boxes. Secondly, 3D organ-specific
segmentation networks are leveraged to generate a multi-organ segmentation map.
Our method achieved an overall Dice score of $0.9260 \pm 0.18$ on the
VISCERAL dataset containing CT scans with varying fields of view and multiple
organs.
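The Dice score reported above measures voxel-wise overlap between a predicted mask and the ground truth. As a minimal sketch (not the paper's evaluation code), the coefficient $2|A \cap B| / (|A| + |B|)$ can be computed over flattened binary masks; the masks below are illustrative stand-ins for flattened 3D volumes:

```python
def dice_score(pred, truth):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) over binary voxel masks.

    `pred` and `truth` are flat lists of 0/1 voxel labels of equal length.
    Returns 1.0 when both masks are empty (a common convention).
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Illustrative flattened masks (hypothetical data, not from the paper).
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_score(pred, truth))  # → 0.75
```

In practice a score such as 0.9260 would be averaged over organs and scans; libraries like MONAI provide batched GPU implementations of the same formula.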
Related papers
- Improving 3D Medical Image Segmentation at Boundary Regions using Local Self-attention and Global Volume Mixing [14.0825980706386]
Volumetric medical image segmentation is a fundamental problem in medical image analysis where the objective is to accurately classify a given 3D volumetric medical image with voxel-level precision.
In this work, we propose a novel hierarchical encoder-decoder-based framework that strives to explicitly capture the local and global dependencies for 3D medical image segmentation.
The proposed framework exploits local volume-based self-attention to encode the local dependencies at high resolution and introduces a novel volumetric-mixer to capture the global dependencies at low-resolution feature representations.
arXiv Detail & Related papers (2024-10-20T11:08:38Z) - 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z) - Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view method for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method naturally learns multi-view global information.
arXiv Detail & Related papers (2023-07-24T14:43:07Z) - Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z) - FocusNetv2: Imbalanced Large and Small Organ Segmentation with
Adversarial Shape Constraint for Head and Neck CT Images [82.48587399026319]
Delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs.
We propose a novel two-stage deep neural network, FocusNetv2, to solve this challenging problem by automatically locating, ROI-pooling, and segmenting small organs.
In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure the consistency between estimated small-organ shapes and organ shape prior knowledge.
arXiv Detail & Related papers (2021-04-05T04:45:31Z) - Unsupervised Region-based Anomaly Detection in Brain MRI with
Adversarial Image Inpainting [4.019851137611981]
This paper proposes a fully automatic, unsupervised inpainting-based brain tumour segmentation system for T1-weighted MRI.
First, a deep convolutional neural network (DCNN) is trained to reconstruct missing healthy brain regions. Then, anomalous regions are determined by identifying areas of highest reconstruction loss.
We show the proposed system is able to segment variously sized and abstract tumours and achieves a mean and standard deviation Dice score of 0.771 and 0.176, respectively.
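The inpainting-based idea above flags regions with the highest reconstruction loss as anomalous: healthy tissue is reconstructed well, while a tumour is not. The following is a hedged sketch of that thresholding step only; the per-region mean squared error and the threshold value are illustrative stand-ins for the trained DCNN and its tuned cut-off:

```python
def region_mse(original, reconstructed):
    """Mean squared error between the intensities of one image region."""
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

def flag_anomalies(regions, threshold):
    """Return indices of regions whose reconstruction error exceeds threshold.

    `regions` is a list of (original, reconstructed) intensity-list pairs.
    """
    return [i for i, (orig, recon) in enumerate(regions)
            if region_mse(orig, recon) > threshold]

# Illustrative intensities (hypothetical data, not from the paper).
regions = [
    ([0.2, 0.3], [0.2, 0.3]),   # healthy: reconstructed faithfully
    ([0.9, 0.8], [0.3, 0.2]),   # tumour-like: poorly reconstructed
]
print(flag_anomalies(regions, threshold=0.05))  # → [1]
```

The real system computes this error map densely over the MRI volume rather than over a handful of hand-picked regions.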
arXiv Detail & Related papers (2020-10-05T12:13:44Z) - Deep Reinforcement Learning for Organ Localization in CT [59.23083161858951]
We propose a deep reinforcement learning approach for organ localization in CT.
In this work, an artificial agent actively teaches itself to localize organs in CT by learning from its successes and mistakes.
Our method can be used as a plug-and-play module for localizing any organ of interest.
arXiv Detail & Related papers (2020-05-11T10:06:13Z) - AttentionAnatomy: A unified framework for whole-body organs at risk
segmentation using multiple partially annotated datasets [30.23917416966188]
Organs-at-risk (OAR) delineation in computed tomography (CT) is an important step in Radiation Therapy (RT) planning.
Our proposed end-to-end convolutional neural network model, called AttentionAnatomy, can be jointly trained with three partially annotated datasets.
Experimental results of our proposed framework showed significant improvements in both Sørensen-Dice coefficient (DSC) and 95% Hausdorff distance.
arXiv Detail & Related papers (2020-01-13T18:31:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.