Boundary-aware Context Neural Network for Medical Image Segmentation
- URL: http://arxiv.org/abs/2005.00966v1
- Date: Sun, 3 May 2020 02:35:49 GMT
- Title: Boundary-aware Context Neural Network for Medical Image Segmentation
- Authors: Ruxin Wang, Shuyuan Chen, Chaojie Ji, Jianping Fan, and Ye Li
- Abstract summary: Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
- Score: 15.585851505721433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation can provide a reliable basis for further clinical
analysis and disease diagnosis. The performance of medical image segmentation
has been significantly advanced by convolutional neural networks (CNNs).
However, most existing CNN-based methods produce unsatisfactory segmentation
masks without accurate object boundaries. This is caused by limited context
information and inadequately discriminative feature maps after consecutive
pooling and convolution operations. Because medical images are characterized
by high intra-class variation, inter-class indistinction, and noise,
extracting powerful context and aggregating discriminative features for
fine-grained segmentation remain challenging. In this paper, we formulate a
boundary-aware context neural network (BA-Net) for 2D medical image
segmentation that captures richer context and preserves fine spatial
information. BA-Net adopts an encoder-decoder architecture. In each stage of
the encoder network, a pyramid edge extraction module first obtains edge
information at multiple granularities. We then design a mini multi-task
learning module that jointly learns to segment object masks and detect lesion
boundaries. In particular, a new interactive attention is proposed to bridge
the two tasks, achieving information complementarity between them and
effectively leveraging boundary information as a strong cue for better
segmentation prediction. Finally, a cross feature fusion module selectively
aggregates multi-level features from the whole encoder network. By cascading
these three modules, richer context and finer-grained features are encoded at
each stage. Extensive experiments on five datasets show that the proposed
BA-Net outperforms state-of-the-art approaches.
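The interactive attention described above operates on learned feature maps inside the network, but its core idea, letting the boundary branch re-weight the segmentation branch so that edge evidence sharpens the mask, can be sketched in plain Python. Everything below is an illustrative assumption (the function name, the sigmoid gating form, and the toy maps are not the authors' implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def interactive_attention(seg_feat, boundary_logits):
    """Re-weight segmentation features with a boundary-derived attention map.

    Hypothetical sketch: the boundary branch's logits are squashed to
    [0, 1] and used to amplify segmentation responses near object edges,
    so boundary evidence acts as a cue for the mask prediction.
    """
    return [
        [s * (1.0 + sigmoid(b)) for s, b in zip(srow, brow)]
        for srow, brow in zip(seg_feat, boundary_logits)
    ]

# Toy example: uniform 4x4 features, strong boundary evidence in the centre.
seg = [[1.0] * 4 for _ in range(4)]
bnd = [[0.0] * 4 for _ in range(4)]
bnd[1][1] = bnd[1][2] = bnd[2][1] = bnd[2][2] = 4.0

out = interactive_attention(seg, bnd)
# Centre cells are boosted more than background cells, i.e. the boundary
# prediction has steered the segmentation features toward the object edge.
```

In the paper the exchange is bidirectional (each task informs the other); this one-way gating only illustrates the boundary-to-mask direction.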
Related papers
- TransResNet: Integrating the Strengths of ViTs and CNNs for High Resolution Medical Image Segmentation via Feature Grafting [6.987177704136503]
High-resolution images are preferable in the medical imaging domain as they significantly improve the diagnostic capability of the underlying method.
Most of the existing deep learning-based techniques for medical image segmentation are optimized for input images having small spatial dimensions and perform poorly on high-resolution images.
We propose a parallel-in-branch architecture called TransResNet, which incorporates Transformer and CNN in a parallel manner to extract features from multi-resolution images independently.
arXiv Detail & Related papers (2024-10-01T18:22:34Z) - MSA$^2$Net: Multi-scale Adaptive Attention-guided Network for Medical Image Segmentation [8.404273502720136]
We introduce MSA$2$Net, a new deep segmentation framework featuring an expedient design of skip-connections.
We propose a Multi-Scale Adaptive Spatial Attention Gate (MASAG) to ensure that spatially relevant features are selectively highlighted.
Our MSA$2$Net outperforms state-of-the-art (SOTA) works or matches their performance.
arXiv Detail & Related papers (2024-07-31T14:41:10Z) - BEFUnet: A Hybrid CNN-Transformer Architecture for Precise Medical Image
Segmentation [0.0]
This paper proposes an innovative U-shaped network called BEFUnet, which enhances the fusion of body and edge information for precise medical image segmentation.
The BEFUnet comprises three main modules: a novel Local Cross-Attention Feature (LCAF) fusion module, a novel Double-Level Fusion (DLF) module, and a dual-branch encoder.
The LCAF module efficiently fuses edge and body features by selectively performing local cross-attention on features that are spatially close between the two modalities.
arXiv Detail & Related papers (2024-02-13T21:03:36Z) - Scale-aware Super-resolution Network with Dual Affinity Learning for
Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$2$SNet) to address diverse medical image segmentation tasks.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - BCS-Net: Boundary, Context and Semantic for Automatic COVID-19 Lung
Infection Segmentation from CT Images [83.82141604007899]
BCS-Net is a novel network for automatic COVID-19 lung infection segmentation from CT images.
BCS-Net follows an encoder-decoder architecture, with most of the design effort concentrated in the decoder stage.
In each BCSR block, an attention-guided global context (AGGC) module is designed to learn the most valuable encoder features for the decoder.
arXiv Detail & Related papers (2022-07-17T08:54:07Z) - UNet#: A UNet-like Redesigning Skip Connections for Medical Image
Segmentation [13.767615201220138]
We propose a novel network structure combining dense skip connections and full-scale skip connections, named UNet-sharp (UNet#) for its shape similar to symbol #.
The proposed UNet# can aggregate feature maps of different scales in the decoder sub-network and capture fine-grained details and coarse-grained semantics from the full scale.
arXiv Detail & Related papers (2022-05-24T03:40:48Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation [28.656853454251426]
This research develops an Artificial Intelligence (AI) framework for supervised skin lesion segmentation.
MFSNet, when evaluated on three publicly available datasets, outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-27T16:10:40Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.