Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained
Deep Neural Networks
- URL: http://arxiv.org/abs/2210.04285v1
- Date: Sun, 9 Oct 2022 15:31:19 GMT
- Title: Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained
Deep Neural Networks
- Authors: Samra Irshad, Douglas P.S. Gomes and Seong Tae Kim
- Abstract summary: We train the 3D encoder-decoder network to simultaneously segment the abdominal organs and their corresponding boundaries in CT scans.
We evaluate the utility of the complementary boundary prediction task in improving abdominal multi-organ segmentation.
- Score: 9.416108287575915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantitative assessment of the abdominal region from clinically acquired CT
scans requires the simultaneous segmentation of abdominal organs. Thanks to the
availability of high-performance computational resources, deep learning-based
methods have resulted in state-of-the-art performance for the segmentation of
3D abdominal CT scans. However, the complex characterization of organs with
fuzzy boundaries prevents the deep learning methods from accurately segmenting
these anatomical organs. Specifically, the voxels on the boundary of organs are
more vulnerable to misprediction due to the highly-varying intensity of
inter-organ boundaries. This paper investigates the possibility of improving
the abdominal image segmentation performance of the existing 3D encoder-decoder
networks by leveraging organ-boundary prediction as a complementary task. To
address the problem of abdominal multi-organ segmentation, we train the 3D
encoder-decoder network to simultaneously segment the abdominal organs and
their corresponding boundaries in CT scans via multi-task learning. The network
is trained end-to-end using a loss function that combines two task-specific
losses, i.e., complete organ segmentation loss and boundary prediction loss. We
explore two different network topologies based on the extent of weights shared
between the two tasks within a unified multi-task framework. To evaluate the
utility of the complementary boundary prediction task in improving
abdominal multi-organ segmentation, we use three state-of-the-art
encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The
effectiveness of utilizing the organs' boundary information for abdominal
multi-organ segmentation is evaluated on two publicly available abdominal CT
datasets. Maximum relative improvements of 3.5% and 3.6% in Mean Dice Score are
observed for the Pancreas-CT and BTCV datasets, respectively.
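The multi-task setup described in the abstract can be illustrated with a short sketch. The code below (assuming a PyTorch-style implementation) shows a shared 3D encoder-decoder trunk with two task-specific heads, one for organ labels and one for organ boundaries, trained with a weighted sum of the two losses. The layer sizes, organ count, and boundary-loss weight `lambda_b` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of boundary-constrained multi-task segmentation (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryConstrainedUNet3D(nn.Module):
    def __init__(self, in_ch=1, n_organs=9, base=16):
        super().__init__()
        # Shared encoder-decoder trunk (heavily simplified stand-in for 3D UNet).
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
        )
        # Task-specific heads: multi-class organ masks and a binary boundary map.
        self.seg_head = nn.Conv3d(base, n_organs, 1)
        self.bnd_head = nn.Conv3d(base, 1, 1)

    def forward(self, x):
        feat = self.dec(self.enc(x))
        return self.seg_head(feat), self.bnd_head(feat)

def multitask_loss(seg_logits, bnd_logits, seg_target, bnd_target, lambda_b=0.5):
    """Combined loss = organ segmentation loss + weighted boundary prediction loss."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)                     # voxel-wise organ labels
    bnd_loss = F.binary_cross_entropy_with_logits(bnd_logits, bnd_target)  # boundary map
    return seg_loss + lambda_b * bnd_loss

# Toy forward/backward pass on a random CT patch.
model = BoundaryConstrainedUNet3D()
ct = torch.randn(1, 1, 32, 64, 64)                  # (batch, channel, D, H, W)
organ_gt = torch.randint(0, 9, (1, 32, 64, 64))     # integer organ labels per voxel
boundary_gt = torch.rand(1, 1, 32, 64, 64).round()  # binary boundary mask
seg_out, bnd_out = model(ct)
loss = multitask_loss(seg_out, bnd_out, organ_gt, boundary_gt)
loss.backward()
```

The two topologies mentioned in the abstract differ mainly in how much of this trunk is shared between the tasks; the sketch shows the fully shared variant, while a less-shared variant would give each head its own decoder branch.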
Related papers
- Scribble-based 3D Multiple Abdominal Organ Segmentation via
Triple-branch Multi-dilated Network with Pixel- and Class-wise Consistency [20.371144313009122]
We propose a novel 3D framework with two consistency constraints for scribble-supervised multiple abdominal organ segmentation from CT.
For more stable unsupervised learning, we use voxel-wise uncertainty to rectify the soft pseudo labels and then supervise the outputs of each decoder.
Experiments on the public WORD dataset show that our method outperforms five existing scribble-supervised methods.
arXiv Detail & Related papers (2023-09-18T12:50:58Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations and do not capture multiple receptive field sizes across the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Boundary-Aware Network for Abdominal Multi-Organ Segmentation [21.079667938055668]
We propose a boundary-aware network (BA-Net) to segment abdominal organs on CT scans and MRI scans.
The results demonstrate that BA-Net is superior to nnUNet on both segmentation tasks.
arXiv Detail & Related papers (2022-08-29T02:24:02Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate that the proposed network achieves state-of-the-art accuracy on small-organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Recurrent Feature Propagation and Edge Skip-Connections for Automatic Abdominal Organ Segmentation [13.544665065396373]
We propose a 3D network with four main components trained end-to-end including encoder, edge detector, decoder with edge skip-connections and recurrent feature propagation head.
Experimental results show that the proposed network outperforms several state-of-the-art models.
arXiv Detail & Related papers (2022-01-02T08:33:19Z)
- Deep Reinforcement Learning for Organ Localization in CT [59.23083161858951]
We propose a deep reinforcement learning approach for organ localization in CT.
In this work, an artificial agent is actively self-taught to localize organs in CT by learning from its assertions and mistakes.
Our method can be used as a plug-and-play module for localizing any organ of interest.
arXiv Detail & Related papers (2020-05-11T10:06:13Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks [0.36944296923226316]
We address fully-automated multi-organ segmentation from abdominal CT and MR images using deep learning.
Our pipeline provides promising results by outperforming state-of-the-art encoder-decoder schemes.
arXiv Detail & Related papers (2020-01-26T21:28:04Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate performance variation at the vertebra level, at the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.