Hierarchical 3D Feature Learning for Pancreas Segmentation
- URL: http://arxiv.org/abs/2109.01667v1
- Date: Fri, 3 Sep 2021 09:27:07 GMT
- Title: Hierarchical 3D Feature Learning for Pancreas Segmentation
- Authors: Federica Proietto Salanitri, Giovanni Bellitto, Ismail Irmakci, Simone
Palazzo, Ulas Bagci, Concetto Spampinato
- Abstract summary: We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans.
Our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%.
Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding.
- Score: 11.588903060674344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel 3D fully convolutional deep network for automated pancreas
segmentation from both MRI and CT scans. More specifically, the proposed model
consists of a 3D encoder that learns to extract volume features at different
scales; features taken at different points of the encoder hierarchy are then
sent to multiple 3D decoders that individually predict intermediate
segmentation maps. Finally, all segmentation maps are combined to obtain a
unique detailed segmentation mask. We test our model on both CT and MRI imaging
data: the publicly available NIH Pancreas-CT dataset (consisting of 82
contrast-enhanced CTs) and a private MRI dataset (consisting of 40 MRI scans).
Experimental results show that our model outperforms existing methods on CT
pancreas segmentation, obtaining an average Dice score of about 88%, and yields
promising segmentation performance on a very challenging MRI data set (average
Dice score is about 77%). Additional control experiments demonstrate that the
achieved performance is due to the combination of our 3D fully-convolutional
deep network and the hierarchical representation decoding, thus substantiating
our architectural design.
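As a rough illustration of the hierarchical decoding described above, the NumPy sketch below extracts volume "features" at several scales, lets each scale predict its own intermediate map, and fuses the upsampled maps into one full-resolution mask. The pooling factors, the sigmoid stand-ins for the 3D decoders, and the averaging-based fusion are simplifying assumptions for illustration, not the paper's actual layers.

```python
import numpy as np

def downsample(vol, factor):
    # Crude average pooling: a stand-in for one encoder stage.
    d, h, w = vol.shape
    v = vol[:d - d % factor, :h - h % factor, :w - w % factor]
    v = v.reshape(d // factor, factor, h // factor, factor, w // factor, factor)
    return v.mean(axis=(1, 3, 5))

def upsample(vol, factor):
    # Nearest-neighbour upsampling: a stand-in for a decoder path.
    return vol.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

def hierarchical_segment(volume, scales=(1, 2, 4)):
    """Predict an intermediate map at each encoder scale, then fuse them."""
    maps = []
    for s in scales:
        feat = downsample(volume, s) if s > 1 else volume
        # Each "decoder" turns its features into a probability map at its
        # own scale, then brings it back to full resolution.
        pred = 1.0 / (1.0 + np.exp(-feat))
        maps.append(upsample(pred, s) if s > 1 else pred)
    # Combine all intermediate maps into one detailed mask (simple averaging here).
    fused = np.mean(maps, axis=0)
    return (fused > 0.5).astype(np.uint8)

mask = hierarchical_segment(np.random.randn(16, 16, 16))
print(mask.shape)  # full-resolution binary mask
```

The key property mirrored here is that coarse-scale predictions contribute context while the full-scale map contributes detail, and the fusion step reconciles them into a single mask.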
Related papers
- fMRI-3D: A Comprehensive Dataset for Enhancing fMRI-based 3D Reconstruction [50.534007259536715]
We present the fMRI-3D dataset, which includes data from 15 participants and showcases a total of 4768 3D objects.
We propose MinD-3D, a novel framework designed to decode 3D visual information from fMRI signals.
arXiv Detail & Related papers (2024-09-17T16:13:59Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001, and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- M3BUNet: Mobile Mean Max UNet for Pancreas Segmentation on CT-Scans [25.636974007788986]
We propose M3BUNet, a fusion of MobileNet and U-Net neural networks, equipped with a novel Mean-Max (MM) attention that operates in two stages to gradually segment pancreas CT images.
For the fine segmentation stage, we found that applying a wavelet decomposition filter to create multi-input images enhances pancreas segmentation performance.
Our approach demonstrates a considerable performance improvement, achieving an average Dice Similarity Coefficient (DSC) value of up to 89.53% and an Intersection over Union (IoU) score of up to 81.16% on the NIH pancreas dataset.
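The wavelet-based multi-input idea in the M3BUNet summary above can be sketched as follows; a single-level 2D Haar transform is used here as an illustrative stand-in, since the summary does not specify the exact wavelet filter.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar decomposition of an image with even sides.

    Returns four half-resolution sub-bands (approximation plus horizontal,
    vertical, and diagonal details) that can serve as a multi-input stack
    for a segmentation network.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-low)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.stack([ll, lh, hl, hh])

subbands = haar_decompose(np.random.rand(64, 64))
print(subbands.shape)  # four half-resolution input channels
```

Feeding the detail sub-bands alongside the approximation gives the network explicit edge information, which is one plausible reading of why multi-input images help the fine segmentation stage.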
arXiv Detail & Related papers (2024-01-18T23:10:08Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
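The adapter mechanism summarized above can be sketched roughly as a small residual bottleneck inserted into an otherwise frozen block; the bottleneck width, the zero initialization, and the ReLU are illustrative assumptions rather than MA-SAM's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class Adapter:
    """Tiny bottleneck module inserted into a frozen transformer block.

    Only these few weights would be updated during fine-tuning, which is
    what makes the adaptation parameter-efficient.
    """
    def __init__(self, dim, bottleneck=8):
        self.down = rng.normal(0.0, 0.02, (dim, bottleneck))  # project down
        self.up = np.zeros((bottleneck, dim))  # zero-init: identity at start

    def __call__(self, x):
        # Residual connection keeps the pre-trained behaviour intact initially.
        return x + np.maximum(x @ self.down, 0.0) @ self.up

tokens = rng.normal(size=(64, 256))  # e.g. flattened 3D patch tokens
adapter = Adapter(dim=256)
out = adapter(tokens)
print(out.shape)  # unchanged shape; equals the input exactly at initialization
```

The zero-initialized up-projection means the adapter starts as a no-op, so training can gradually inject third-dimensional information without disturbing the pre-trained 2D backbone.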
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- Med-DANet: Dynamic Architecture Network for Efficient Medical Volumetric Segmentation [13.158995287578316]
We propose a dynamic architecture network named Med-DANet to achieve effective accuracy and efficiency trade-off.
For each slice of the input 3D MRI volume, our proposed method learns a slice-specific decision by the Decision Network.
Our proposed method achieves comparable or better results than previous state-of-the-art methods for 3D MRI brain tumor segmentation.
arXiv Detail & Related papers (2022-06-14T03:25:58Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Automatic Segmentation, Localization, and Identification of Vertebrae in 3D CT Images Using Cascaded Convolutional Neural Networks [22.572414102512358]
This paper presents a method for automatic segmentation, localization, and identification of vertebrae in 3D CT images.
Our method tackles all these tasks in a single multi-stage framework without any assumptions.
Our method achieved a mean Dice score of 96%, a mean localization error of 8.3 mm, and a mean identification rate of 84%.
arXiv Detail & Related papers (2020-09-29T06:11:37Z)
- Multi-modal segmentation of 3D brain scans using neural networks [0.0]
Deep convolutional neural networks are trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans.
Segmentation quality is quantified using the Dice metric for a total of 27 anatomical structures.
arXiv Detail & Related papers (2020-08-11T09:13:54Z)
- A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation [18.500206499468902]
CBCT images are of low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.