Recurrent Feature Propagation and Edge Skip-Connections for Automatic
Abdominal Organ Segmentation
- URL: http://arxiv.org/abs/2201.00317v2
- Date: Fri, 19 May 2023 04:25:57 GMT
- Title: Recurrent Feature Propagation and Edge Skip-Connections for Automatic
Abdominal Organ Segmentation
- Authors: Zefan Yang, Di Lin, Dong Ni and Yi Wang
- Abstract summary: We propose a 3D network with four main components trained end-to-end including encoder, edge detector, decoder with edge skip-connections and recurrent feature propagation head.
Experimental results show that the proposed network outperforms several state-of-the-art models.
- Score: 13.544665065396373
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation of abdominal organs in computed tomography (CT) images
can support radiation therapy and image-guided surgery workflows. Developing
such automatic solutions remains challenging, mainly owing to complex organ
interactions and blurry boundaries in CT images. To address these issues, we
focus on effective spatial context modeling and explicit edge segmentation
priors. Accordingly, we propose a 3D network with four main components trained
end-to-end including shared encoder, edge detector, decoder with edge
skip-connections (ESCs) and recurrent feature propagation head (RFP-Head). To
capture long-range spatial dependencies, the RFP-Head propagates and harvests
local features through directed acyclic graphs (DAGs) formulated with recurrent
connections in an efficient slice-wise manner, with respect to the spatial
arrangement of image units. To leverage edge information, the edge detector
learns edge prior knowledge specifically tuned for semantic segmentation by
exploiting intermediate encoder features under edge supervision.
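The slice-wise recurrent propagation idea can be illustrated with a minimal sketch. As an assumption for illustration, a single top-to-bottom sweep with a fixed decay weight stands in for the paper's learned recurrent connections over multi-directional DAGs:

```python
def rfp_sweep(feature_rows, alpha=0.5):
    """Illustrative slice-wise recurrent propagation.

    Sweeps a 2D feature map (a list of rows) top to bottom: each row
    accumulates a decayed message from the row above, so every unit
    harvests context from the whole column traversed so far. The fixed
    decay `alpha` and single sweep direction are simplifying
    assumptions; the paper uses learned recurrent connections over DAGs.
    """
    out = [feature_rows[0][:]]          # first row has no predecessor
    for row in feature_rows[1:]:
        prev = out[-1]
        # ReLU-style rectification of (local feature + recurrent message)
        out.append([max(0.0, v + alpha * h) for v, h in zip(row, prev)])
    return out

fmap = [[1.0, 0.0],
        [0.0, 1.0],
        [0.0, 0.0]]
print(rfp_sweep(fmap))  # [[1.0, 0.0], [0.5, 1.0], [0.25, 0.5]]
```

The last row receives nonzero activations even though its local features are zero, showing how distant slices contribute context; the full RFP-Head would run such sweeps along several directions and merge them.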
The ESCs then aggregate the edge knowledge with multi-level decoder features to
learn a hierarchy of discriminative features explicitly modeling
complementarity between organs' interiors and edges for segmentation. We
conduct extensive experiments on two challenging abdominal CT datasets with
eight annotated organs. Experimental results show that the proposed network
outperforms several state-of-the-art models, especially for the segmentation of
small and complicated structures (gallbladder, esophagus, stomach, pancreas and
duodenum). The code will be publicly available.
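As a rough sketch of the edge skip-connection idea described above, the hypothetical fusion below combines decoder features with edge-detector features via a weighted element-wise sum; the actual ESCs would aggregate multi-level features with learned, channel-wise operations:

```python
def edge_skip_connection(decoder_feats, edge_feats, w_dec=1.0, w_edge=1.0):
    """Hypothetical edge skip-connection (illustrative only).

    Fuses decoder features with edge-detector features so the decoder
    sees explicit boundary cues alongside organ-interior evidence.
    A weighted element-wise sum stands in for the learned aggregation
    the paper's ESCs would perform.
    """
    return [w_dec * d + w_edge * e for d, e in zip(decoder_feats, edge_feats)]

decoder = [0.2, 0.8, 0.5]   # interior evidence at three positions
edges   = [0.0, 1.0, 0.3]   # edge-prior responses at the same positions
print(edge_skip_connection(decoder, edges))
```

Positions with strong edge responses are boosted relative to interior-only evidence, which is the complementarity between organ interiors and edges that the ESCs are meant to model.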
Related papers
- ASSNet: Adaptive Semantic Segmentation Network for Microtumors and Multi-Organ Segmentation [32.74195208408193]
Medical image segmentation is a crucial task in computer vision, supporting clinicians in diagnosis, treatment planning, and disease monitoring.
We propose the Adaptive Semantic Segmentation Network (ASSNet), a transformer architecture that effectively integrates local and global features for precise medical image segmentation.
Tests on diverse medical image segmentation tasks, including multi-organ, liver tumor, and bladder tumor segmentation, demonstrate that ASSNet achieves state-of-the-art results.
arXiv Detail & Related papers (2024-09-12T06:25:44Z) - M3BUNet: Mobile Mean Max UNet for Pancreas Segmentation on CT-Scans [25.636974007788986]
We propose M3BUNet, a fusion of MobileNet and U-Net neural networks, equipped with a novel Mean-Max (MM) attention that operates in two stages to gradually segment pancreas CT images.
For the fine segmentation stage, we found that applying a wavelet decomposition filter to create multi-input images enhances pancreas segmentation performance.
Our approach demonstrates a considerable performance improvement, achieving an average Dice Similarity Coefficient (DSC) value of up to 89.53% and an Intersection over Union (IoU) score of up to 81.16% on the NIH pancreas dataset.
arXiv Detail & Related papers (2024-01-18T23:10:08Z) - Structure-aware registration network for liver DCE-CT images [50.28546654316009]
We propose a novel structure-aware registration method by incorporating structural information of related organs with segmentation-guided deep registration network.
Our proposed method can achieve higher registration accuracy and preserve anatomical structure more effectively than state-of-the-art methods.
arXiv Detail & Related papers (2023-03-08T14:08:56Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - BCS-Net: Boundary, Context and Semantic for Automatic COVID-19 Lung
Infection Segmentation from CT Images [83.82141604007899]
BCS-Net is a novel network for automatic COVID-19 lung infection segmentation from CT images.
BCS-Net follows an encoder-decoder architecture, and more designs focus on the decoder stage.
In each BCSR block, the attention-guided global context (AGGC) module is designed to learn the most valuable encoder features for the decoder.
arXiv Detail & Related papers (2022-07-17T08:54:07Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts coarse 2D results into high-quality 3D segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Spatially Dependent U-Nets: Highly Accurate Architectures for Medical
Imaging Segmentation [10.77039660100327]
We introduce a novel deep neural network architecture that exploits the inherent spatial coherence of anatomical structures.
Our approach is well equipped to capture long-range spatial dependencies in the segmented pixel/voxel space.
Our method compares favourably to the commonly used U-Net and U-Net++ architectures.
arXiv Detail & Related papers (2021-03-22T10:37:20Z) - Unsupervised Bidirectional Cross-Modality Adaptation via Deeply
Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z) - Abdominal multi-organ segmentation with cascaded convolutional and
adversarial deep networks [0.36944296923226316]
We address fully-automated multi-organ segmentation from abdominal CT and MR images using deep learning.
Our pipeline provides promising results by outperforming state-of-the-art encoder-decoder schemes.
arXiv Detail & Related papers (2020-01-26T21:28:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.