SAUNet: Shape Attentive U-Net for Interpretable Medical Image
Segmentation
- URL: http://arxiv.org/abs/2001.07645v3
- Date: Mon, 16 Mar 2020 17:59:21 GMT
- Title: SAUNet: Shape Attentive U-Net for Interpretable Medical Image
Segmentation
- Authors: Jesse Sun, Fatemeh Darbehani, Mark Zaidi, and Bo Wang
- Abstract summary: We present a new architecture called Shape Attentive U-Net (SAUNet) which focuses on model interpretability and robustness.
Our method achieves state-of-the-art results on two large public cardiac MRI segmentation datasets, SUN09 and AC17.
- Score: 2.6837973648527926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation is a difficult but important task for many
clinical operations such as cardiac bi-ventricular volume estimation. More
recently, there has been a shift toward using deep learning and fully
convolutional neural networks (CNNs) for image segmentation, which has yielded
state-of-the-art results on many public benchmark datasets. Despite the
progress of deep learning in medical image segmentation, standard CNNs are
still not fully adopted in clinical settings as they lack robustness and
interpretability. Shape is generally a more meaningful feature than texture
alone, yet texture is what regular CNNs tend to learn, and this contributes to
their lack of robustness. Likewise, previous work on model interpretability has
focused on post hoc gradient-based saliency methods. However,
gradient-based saliency methods typically require additional computations post
hoc and have been shown to be unreliable for interpretability. Thus, we present
a new architecture called Shape Attentive U-Net (SAUNet), which focuses on model
interpretability and robustness. The proposed architecture attempts to address
these limitations by the use of a secondary shape stream that captures rich
shape-dependent information in parallel with the regular texture stream.
Furthermore, we suggest that multi-resolution saliency maps can be learned using
our dual-attention decoder module, which allows for multi-level interpretability
and mitigates the need for additional post hoc computation. Our method also
achieves state-of-the-art results on two large public cardiac MRI segmentation
datasets, SUN09 and AC17.
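
The listing contains no code, but the decoder idea above lends itself to a short illustration. Below is a minimal PyTorch-style sketch of a dual-attention decoder block: a spatial-attention branch whose sigmoid map can be read out directly as a per-resolution saliency map, combined with a squeeze-and-excitation-style channel-attention branch. The module name, channel sizes, and gating details are assumptions made for illustration, not the authors' implementation, and the parallel shape stream is omitted for brevity.

```python
# Illustrative sketch only: layer choices and gating details are assumptions,
# not the SAUNet authors' code. The parallel shape stream is not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttentionDecoderBlock(nn.Module):
    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        # Fuse the upsampled decoder feature with the encoder skip connection.
        self.fuse = nn.Sequential(
            nn.Conv2d(in_channels + skip_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        # Spatial attention: a 1x1 conv squeezed to one channel; its sigmoid
        # output is an H x W map that can be inspected as a saliency map.
        self.spatial_gate = nn.Conv2d(out_channels, 1, kernel_size=1)
        # Channel attention: squeeze-and-excitation-style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_channels, out_channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // 4, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        x = self.fuse(torch.cat([x, skip], dim=1))
        saliency = torch.sigmoid(self.spatial_gate(x))   # (N, 1, H, W)
        x = x * saliency + x * self.channel_gate(x)      # combine both attention paths
        return x, saliency                               # saliency is the interpretable output


if __name__ == "__main__":
    block = DualAttentionDecoderBlock(in_channels=128, skip_channels=64, out_channels=64)
    deep = torch.randn(1, 128, 16, 16)   # coarse decoder feature
    skip = torch.randn(1, 64, 32, 32)    # encoder skip connection
    out, sal = block(deep, skip)
    print(out.shape, sal.shape)          # (1, 64, 32, 32) and (1, 1, 32, 32)
```

In a full decoder, collecting the saliency output from each such block would give the multi-resolution saliency maps the abstract describes, with no post hoc gradient computation required.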
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - Evaluation of importance estimators in deep learning classifiers for
Computed Tomography [1.6710577107094642]
Interpretability of deep neural networks often relies on estimating the importance of input features.
Two versions of SmoothGrad topped the fidelity and ROC rankings, whereas both Integrated Gradients and SmoothGrad excelled in the DSC evaluation (a minimal SmoothGrad sketch appears after this list).
There was a critical discrepancy between model-centric (fidelity) and human-centric (ROC and DSC) evaluation.
arXiv Detail & Related papers (2022-09-30T11:57:25Z) - Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
An efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - DCSAU-Net: A Deeper and More Compact Split-Attention U-Net for Medical
Image Segmentation [1.1315617886931961]
We propose a novel split-attention U-shaped network (DCSAU-Net) that extracts useful features using multi-scale combined split-attention and deeper depthwise convolution.
As a result, DCSAU-Net displays better performance than other state-of-the-art (SOTA) methods in terms of the mean Intersection over Union (mIoU) and F1-score.
arXiv Detail & Related papers (2022-02-02T11:36:15Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - Learning Fuzzy Clustering for SPECT/CT Segmentation via Convolutional
Neural Networks [5.3123694982708365]
Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy.
The segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts.
This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.
arXiv Detail & Related papers (2021-04-17T19:03:52Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum
Ultrasound [74.22397862400177]
We propose a novel frustum-ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z) - A Spatially Constrained Deep Convolutional Neural Network for Nerve
Fiber Segmentation in Corneal Confocal Microscopic Images using Inaccurate
Annotations [10.761046991755311]
We propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation.
The proposed method has been evaluated on corneal confocal microscopy (CCM) images for nerve fiber segmentation.
arXiv Detail & Related papers (2020-04-20T16:56:13Z)
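
The importance-estimator study listed above ("Evaluation of importance estimators in deep learning classifiers for Computed Tomography") compares post hoc methods such as SmoothGrad and Integrated Gradients, the kind of gradient-based saliency that SAUNet's learned attention maps aim to make unnecessary. As a point of reference, here is a minimal SmoothGrad sketch in PyTorch; the function name, toy model, noise level, and sample count are illustrative assumptions, not code from either paper.

```python
# Minimal SmoothGrad sketch: average input gradients over noisy copies of the
# input. Model, noise level, and sample count are placeholders for illustration.
import torch
import torch.nn as nn


def smoothgrad(model, image, target_class, n_samples=25, noise_std=0.1):
    """Return an importance map with the same shape as `image`."""
    model.eval()
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image.detach() + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy)[0, target_class]   # scalar score for the target class
        score.backward()
        grads += noisy.grad.detach()
    return (grads / n_samples).abs()            # averaged gradient magnitude map


if __name__ == "__main__":
    # Stand-in classifier and input, purely for demonstration.
    toy_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.randn(1, 1, 28, 28)
    saliency = smoothgrad(toy_model, x, target_class=3)
    print(saliency.shape)   # torch.Size([1, 1, 28, 28])
```

SmoothGrad averages the input gradient over several noise-perturbed copies of the input, which is exactly the extra post hoc computation the SAUNet abstract refers to.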
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.