Hard Exudate Segmentation Supplemented by Super-Resolution with
Multi-scale Attention Fusion Module
- URL: http://arxiv.org/abs/2211.09404v1
- Date: Thu, 17 Nov 2022 08:25:04 GMT
- Title: Hard Exudate Segmentation Supplemented by Super-Resolution with
Multi-scale Attention Fusion Module
- Authors: Jiayi Zhang, Xiaoshan Chen, Zhongxi Qiu, Mingming Yang, Yan Hu, Jiang
Liu
- Abstract summary: Hard exudates (HE) are the most specific biomarker for retinal edema.
This paper proposes a novel hard exudate segmentation method named SS-MAF with an auxiliary super-resolution task.
We evaluate our method on two public lesion datasets, IDRiD and E-Ophtha.
- Score: 14.021944194533644
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Hard exudates (HE) are the most specific biomarker for retinal edema. Precise
HE segmentation is vital for disease diagnosis and treatment, but automatic
segmentation is challenged by the lesions' large variation in size, shape, and
position, which makes tiny lesions and lesion boundaries difficult to detect.
Considering the complementary features between the segmentation and
super-resolution tasks, this paper proposes a novel hard exudate segmentation
method named SS-MAF with an auxiliary super-resolution task, which brings in
detailed features that help detect tiny lesions and lesion boundaries.
Specifically, we propose a fusion module named Multi-scale Attention Fusion
(MAF) for our dual-stream framework to effectively integrate the features of
the two tasks. MAF first adopts a split spatial convolutional (SSC) layer for
multi-scale feature extraction and then utilizes an attention mechanism to
fuse the features of the two tasks. To account for pixel dependency, we
introduce a region mutual information (RMI) loss to optimize the MAF module
for tiny lesion and boundary detection. We evaluate our method on two public
lesion datasets, IDRiD and E-Ophtha. Our method shows competitive performance
with low-resolution inputs, both quantitatively and qualitatively. On the
E-Ophtha dataset, it achieves $\geq 3\%$ higher Dice and recall than
state-of-the-art methods.
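The abstract describes MAF only at a high level. As a non-authoritative illustration, the PyTorch sketch below shows one plausible reading: an SSC layer that splits channels into groups and convolves each group with a different kernel size for multi-scale extraction, followed by a squeeze-and-excitation-style gate that fuses the segmentation-stream and super-resolution-stream features. The class names, channel splits, kernel sizes, and gating design are assumptions rather than the authors' implementation, and the RMI loss is omitted.
```python
import torch
import torch.nn as nn

class SplitSpatialConv(nn.Module):
    """Hypothetical SSC layer: split the channels into groups and run each
    group through a convolution with a different kernel size, producing
    multi-scale features at unchanged resolution."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        self.split = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(self.split, self.split, k, padding=k // 2)
            for k in kernel_sizes)

    def forward(self, x):
        chunks = torch.split(x, self.split, dim=1)
        return torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)

class MAF(nn.Module):
    """Sketch of Multi-scale Attention Fusion: SSC on each stream, then a
    squeeze-and-excitation-style gate to fuse the two streams."""
    def __init__(self, channels):
        super().__init__()
        self.ssc_seg = SplitSpatialConv(channels)   # segmentation stream
        self.ssc_sr = SplitSpatialConv(channels)    # super-resolution stream
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_seg, f_sr):
        f = torch.cat([self.ssc_seg(f_seg), self.ssc_sr(f_sr)], dim=1)
        return self.proj(f * self.gate(f))  # attention-weighted fusion

# Usage: fuse two 64-channel feature maps of matching spatial size.
# maf = MAF(64)
# fused = maf(torch.randn(2, 64, 48, 48), torch.randn(2, 64, 48, 48))
```
The final 1x1 projection returns the fused tensor to the original channel count so such a module could drop into either stream; whether the paper does this is not stated in the abstract.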
Related papers
- Multi-scale Information Sharing and Selection Network with Boundary Attention for Polyp Segmentation [10.152504573356413]
We propose a Multi-scale Information Sharing and Selection Network (MISNet) for the polyp segmentation task.
Experiments on five polyp segmentation datasets demonstrate that MISNet improves the accuracy and clarity of segmentation results.
arXiv Detail & Related papers (2024-05-18T02:48:39Z)
- A Mutual Inclusion Mechanism for Precise Boundary Segmentation in Medical Images [2.9137615132901704]
We present a novel deep learning-based approach, MIPC-Net, for precise boundary segmentation in medical images.
We introduce the MIPC module, which enhances the focus on channel information when extracting position features.
We also propose the GL-MIPC-Residue, a global residual connection that enhances the integration of the encoder and decoder.
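The summary only names the GL-MIPC-Residue; as a hedged sketch of the general pattern of a global residual connection tying the encoder input back into the decoder output (the 1x1 projection and the class name below are assumptions, not the paper's design):
```python
import torch.nn as nn

class GlobalResidual(nn.Module):
    """Hypothetical global residual: project the raw network input and add
    it to the decoder output so fine image detail reaches the prediction."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.skip = nn.Conv2d(in_ch, out_ch, 1)  # match channel counts

    def forward(self, x_in, dec_out):
        # x_in: network input; dec_out: decoder output at the same resolution
        return dec_out + self.skip(x_in)
```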
arXiv Detail & Related papers (2024-04-12T02:14:35Z)
- Semi- and Weakly-Supervised Learning for Mammogram Mass Segmentation with Limited Annotations [49.33388736227072]
We propose a semi- and weakly-supervised learning framework for mass segmentation.
We use limited strongly-labeled samples and sufficient weakly-labeled samples to achieve satisfactory performance.
Experiments on CBIS-DDSM and INbreast datasets demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-03-14T12:05:25Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Multi-level feature fusion network combining attention mechanisms for polyp segmentation [11.971323720289249]
We propose a novel approach for polyp segmentation, named MLFF-Net, which leverages multi-level feature fusion and attention mechanisms.
MLFF-Net comprises three modules: the Multi-scale Attention Module (MAM), the High-level Feature Enhancement Module (HFEM), and the Global Attention Module (GAM).
arXiv Detail & Related papers (2023-09-19T00:18:50Z)
- Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieves consistent improvements over other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
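The shared-encoder, dual-decoder design described above is a common pattern; the minimal PyTorch sketch below illustrates it under assumed toy layer choices, with one head producing segmentation logits and the other inpainting the masked lesion region. It is not the paper's actual architecture.
```python
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Minimal shared-encoder / dual-decoder pattern: one decoder predicts
    the segmentation mask, the other inpaints the masked lesion region."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(       # toy stand-in for a real backbone
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(feat, feat, 2, stride=2), nn.ReLU(inplace=True),
                nn.Conv2d(feat, out_ch, 1),
            )
        self.seg_head = decoder(1)          # segmentation logits
        self.inpaint_head = decoder(in_ch)  # reconstructed image

    def forward(self, x):
        z = self.encoder(x)                 # features shared by both tasks
        return self.seg_head(z), self.inpaint_head(z)
```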
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution [56.622832383316215]
We propose a novel multi-head convolutional attention module to super-resolve CT and MRI scans.
Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple input tensors.
We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention.
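As a hedged sketch of this idea, the PyTorch code below builds several convolutional attention heads whose kernel sizes (receptive fields) and spatial-reduction rates differ, then averages their gated outputs. The head structure, the kernel/reduction pairings, and the averaging step are assumptions, not the authors' module.
```python
import torch
import torch.nn as nn

class ConvAttentionHead(nn.Module):
    """One head: a convolution with its own kernel size produces a joint
    spatial-channel attention map that gates the input."""
    def __init__(self, channels, kernel_size, reduction):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size,
                      padding=kernel_size // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class MultiHeadConvAttention(nn.Module):
    """Average several heads whose receptive fields and reduction rates
    differ, as the summary describes."""
    def __init__(self, channels, kernels=(3, 5, 7), reductions=(2, 4, 8)):
        super().__init__()
        self.heads = nn.ModuleList(
            ConvAttentionHead(channels, k, r)
            for k, r in zip(kernels, reductions))

    def forward(self, x):
        return torch.stack([h(x) for h in self.heads], dim=0).mean(dim=0)
```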
arXiv Detail & Related papers (2022-04-08T07:56:55Z)
- Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention and Dynamic Resampling [13.542898009730804]
The performance of relevant algorithms is significantly affected by the proper fusion of the multi-modal information.
We present the Max-Fusion U-Net that achieves improved pathology segmentation performance.
We evaluate our method on the Myocardial Pathology Segmentation (MyoPS) dataset, which combines multi-sequence CMR images.
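Reading the name literally, max-fusion suggests keeping the strongest per-pixel response across modality-specific feature maps; the sketch below shows that interpretation (the element-wise maximum and the example sequence names are assumptions, not details confirmed by the summary):
```python
import torch

def max_fusion(features):
    """Fuse per-modality feature maps by element-wise maximum, keeping the
    strongest response across modalities at each spatial position.
    features: list of (B, C, H, W) tensors, one per modality."""
    return torch.stack(features, dim=0).max(dim=0).values

# e.g. fused = max_fusion([f_lge, f_t2, f_bssfp])  # hypothetical CMR sequences
```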
arXiv Detail & Related papers (2020-09-05T17:24:23Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
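As a hedged sketch of gated fusion over whichever modalities are present: a learned gate scores each modality's features, and the features are combined with normalized gate weights, so a missing modality can simply be left out of the list. The gate design below is an assumption, not the paper's.
```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of gated fusion: a learned gate weights each available
    modality's features; absent modalities are simply omitted."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1),
                                  nn.Sigmoid())

    def forward(self, features):
        # features: list of (B, C, H, W) tensors from the modalities present
        gates = [self.gate(f) for f in features]
        total = torch.stack(gates, 0).sum(0) + 1e-6   # normalise the gates
        return sum(g / total * f for g, f in zip(gates, features))
```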
arXiv Detail & Related papers (2020-02-22T14:32:04Z)