Multi-level feature fusion network combining attention mechanisms for
polyp segmentation
- URL: http://arxiv.org/abs/2309.10219v2
- Date: Sun, 24 Sep 2023 15:14:29 GMT
- Title: Multi-level feature fusion network combining attention mechanisms for
polyp segmentation
- Authors: Junzhuo Liu, Qiaosong Chen, Ye Zhang, Zhixiang Wang, Deng Xin, Jin
Wang
- Abstract summary: We propose a novel approach for polyp segmentation, named MLFF-Net, which leverages multi-level feature fusion and attention mechanisms.
MLFF-Net comprises three modules: Multi-scale Attention Module (MAM), High-level Feature Enhancement Module (HFEM), and Global Attention Module (GAM).
- Score: 11.971323720289249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clinically, automated polyp segmentation techniques have the potential to
significantly improve the efficiency and accuracy of medical diagnosis, thereby
reducing the risk of colorectal cancer in patients. Unfortunately, existing
methods suffer from two significant weaknesses that can impact the accuracy of
segmentation. Firstly, features extracted by encoders are not adequately
filtered and utilized. Secondly, semantic conflicts and information redundancy
caused by feature fusion are not attended to. To overcome these limitations, we
propose a novel approach for polyp segmentation, named MLFF-Net, which
leverages multi-level feature fusion and attention mechanisms. Specifically,
MLFF-Net comprises three modules: Multi-scale Attention Module (MAM),
High-level Feature Enhancement Module (HFEM), and Global Attention Module
(GAM). Among these, MAM is used to extract multi-scale information and polyp
details from the shallow output of the encoder. In HFEM, the deep features of
the encoders complement each other by aggregation. Meanwhile, the attention
mechanism redistributes the weight of the aggregated features, weakening the
conflicting redundant parts and highlighting the information useful to the
task. GAM combines encoder and decoder features and computes global dependencies to counteract the locality of the receptive field. Experimental results on five public datasets show that the proposed method not only segments multiple types of polyps but also surpasses current state-of-the-art methods in both accuracy and generalization ability.
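For a concrete picture of the module layout described above, here is a minimal PyTorch sketch; the internals (dilated convolutions for MAM, SE-style channel re-weighting for HFEM, a single multi-head self-attention layer for GAM) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the MLFF-Net module layout described in the abstract.
# Module internals are assumptions standing in for MAM, HFEM and GAM.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MAM(nn.Module):
    """Multi-scale Attention Module: multi-scale context from a shallow feature."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class HFEM(nn.Module):
    """High-level Feature Enhancement: aggregate deep features, re-weight channels."""
    def __init__(self, ch):
        super().__init__()
        self.att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                 nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, deep_a, deep_b):
        agg = deep_a + F.interpolate(deep_b, size=deep_a.shape[-2:],
                                     mode="bilinear", align_corners=False)
        return agg * self.att(agg)          # suppress conflicting/redundant channels


class GAM(nn.Module):
    """Global Attention: fuse encoder/decoder features, model global dependencies."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.mix = nn.Conv2d(2 * ch, ch, 1)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, enc, dec):
        x = self.mix(torch.cat([enc, dec], dim=1))
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)          # (B, HW, C)
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2).view(b, c, h, w)


if __name__ == "__main__":
    shallow = torch.randn(1, 64, 88, 88)
    deep_a, deep_b = torch.randn(1, 64, 22, 22), torch.randn(1, 64, 11, 11)
    dec = torch.randn(1, 64, 22, 22)
    fused = GAM(64)(HFEM(64)(deep_a, deep_b), dec)
    print(MAM(64)(shallow).shape, fused.shape)
```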
Related papers
- PVAFN: Point-Voxel Attention Fusion Network with Multi-Pooling Enhancing for 3D Object Detection [59.355022416218624]
The integration of point and voxel representations is becoming more common in LiDAR-based 3D object detection.
We propose a novel two-stage 3D object detector, called the Point-Voxel Attention Fusion Network (PVAFN).
PVAFN uses a multi-pooling strategy to integrate both multi-scale and region-specific information effectively.
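As a rough illustration of a multi-pooling step, the sketch below pools the same RoI feature map at several grid resolutions and concatenates the results; the 2D tensors, grid sizes, and pooling operator are assumptions and do not reproduce PVAFN's point-voxel pipeline.

```python
# Illustrative multi-pooling sketch: pool one RoI feature map at several grid
# resolutions and concatenate, as a stand-in for the multi-pooling idea.
import torch
import torch.nn.functional as F

def multi_pool(roi_feat, grids=(1, 2, 4)):
    # roi_feat: (C, H, W) features cropped for a single region proposal
    pooled = [F.adaptive_max_pool2d(roi_feat, g).flatten() for g in grids]
    return torch.cat(pooled)                     # one fixed-length descriptor

desc = multi_pool(torch.randn(64, 14, 14))
print(desc.shape)                                # 64 * (1 + 4 + 16) = 1344 values
```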
arXiv Detail & Related papers (2024-08-26T19:43:01Z) - ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
A CFA module integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling domain-specific knowledge to be incorporated.
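The frozen-backbone-plus-trainable-branch pattern can be sketched as follows; the fusion by concatenation and 1x1 convolution, the branch sizes, and the stand-in ViT are assumptions rather than the ASPS implementation.

```python
# Sketch of a frozen ViT encoder paired with a trainable CNN branch; the two
# feature maps are concatenated and fused. Shapes and fusion are assumptions.
import torch
import torch.nn as nn

class FrozenPlusCNN(nn.Module):
    def __init__(self, vit: nn.Module, ch_vit: int, ch_cnn: int = 64):
        super().__init__()
        self.vit = vit
        for p in self.vit.parameters():          # keep the ViT weights fixed
            p.requires_grad = False
        self.cnn = nn.Sequential(                # trainable domain-specific branch
            nn.Conv2d(3, ch_cnn, 3, stride=16, padding=1), nn.ReLU(),
            nn.Conv2d(ch_cnn, ch_cnn, 3, padding=1),
        )
        self.fuse = nn.Conv2d(ch_vit + ch_cnn, ch_cnn, 1)

    def forward(self, img):
        with torch.no_grad():
            f_vit = self.vit(img)                # assumed (B, ch_vit, H/16, W/16)
        f_cnn = self.cnn(img)
        return self.fuse(torch.cat([f_vit, f_cnn], dim=1))

dummy_vit = nn.Conv2d(3, 192, 16, stride=16)     # stand-in for a frozen ViT trunk
model = FrozenPlusCNN(dummy_vit, ch_vit=192)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 14, 14])
```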
arXiv Detail & Related papers (2024-06-30T14:55:32Z) - A Mutual Inclusion Mechanism for Precise Boundary Segmentation in Medical Images [2.9137615132901704]
We present a novel deep learning-based approach, MIPC-Net, for precise boundary segmentation in medical images.
We introduce the MIPC module, which enhances the focus on channel information when extracting position features.
We also propose the GL-MIPC-Residue, a global residual connection that enhances the integration of the encoder and decoder.
arXiv Detail & Related papers (2024-04-12T02:14:35Z) - Edge-aware Feature Aggregation Network for Polyp Segmentation [40.3881565207086]
In this study, we present a novel Edge-aware Feature Aggregation Network (EFA-Net) for polyp segmentation.
EFA-Net makes full use of cross-level and multi-scale features to enhance polyp segmentation performance.
Experimental results on five widely adopted colonoscopy datasets show that our EFA-Net outperforms state-of-the-art polyp segmentation methods in terms of generalization and effectiveness.
arXiv Detail & Related papers (2023-09-19T11:09:38Z) - Efficient Polyp Segmentation Via Integrity Learning [14.34505893948565]
This paper introduces the integrity concept in polyp segmentation at both macro and micro levels, aiming to alleviate integrity deficiency.
Our Integrity Capturing Polyp (IC-PolypSeg) network utilizes lightweight backbones and three key components to improve integrity.
arXiv Detail & Related papers (2023-09-15T08:11:05Z) - Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It is a traditional U-shaped encoder-decoder structure augmented with a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
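A minimal sketch of a per-image dynamic kernel in the spirit of the scheme described here; the kernel size (1x1), the global pooling used to condition it, and all channel counts are assumptions.

```python
# Sketch of a dynamic kernel head: a small layer predicts a per-image 1x1 kernel
# from globally pooled features, which is then applied to that image's feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicKernelHead(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gen = nn.Linear(ch, ch)             # predicts a 1x1, ch -> 1 kernel

    def forward(self, feat):                     # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        kernel = self.gen(feat.mean(dim=(2, 3))) # (B, C), one kernel per image
        # grouped conv applies each image's own kernel to its own feature map
        out = F.conv2d(feat.reshape(1, b * c, h, w),
                       kernel.reshape(b, c, 1, 1), groups=b)
        return out.view(b, 1, h, w)              # per-image segmentation logits

head = DynamicKernelHead(32)
print(head(torch.randn(2, 32, 44, 44)).shape)    # torch.Size([2, 1, 44, 44])
```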
arXiv Detail & Related papers (2023-01-12T09:53:57Z) - Hard Exudate Segmentation Supplemented by Super-Resolution with
Multi-scale Attention Fusion Module [14.021944194533644]
Hard exudates (HE) are the most specific biomarkers of retinal edema.
This paper proposes a novel hard exudates segmentation method named SS-MAF with an auxiliary super-resolution task.
We evaluate our method on two public lesion datasets, IDRiD and E-Ophtha.
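The auxiliary super-resolution idea can be sketched as a shared encoder with two heads trained jointly; the layer sizes, the 2x upscaling factor, and the 0.5 loss weight below are assumptions, not the SS-MAF configuration.

```python
# Sketch of joint training with an auxiliary super-resolution task: a shared
# encoder feeds a segmentation head and an SR head; the two losses are summed.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
seg_head = nn.Conv2d(32, 1, 1)                       # lesion mask logits
sr_head = nn.Sequential(nn.Conv2d(32, 3 * 4, 3, padding=1),
                        nn.PixelShuffle(2))          # 2x super-resolution branch

lr_img = torch.randn(2, 3, 128, 128)                 # low-resolution input
mask = torch.randint(0, 2, (2, 1, 128, 128)).float() # ground-truth lesion mask
hr_img = torch.randn(2, 3, 256, 256)                 # ground-truth high-res image

feat = enc(lr_img)
loss = (nn.functional.binary_cross_entropy_with_logits(seg_head(feat), mask)
        + 0.5 * nn.functional.l1_loss(sr_head(feat), hr_img))
loss.backward()
```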
arXiv Detail & Related papers (2022-11-17T08:25:04Z) - Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers [124.01928050651466]
We propose a new type of polyp segmentation method, named Polyp-PVT.
It effectively suppresses noise in the features and significantly improves their expressive capability.
arXiv Detail & Related papers (2021-08-16T07:09:06Z) - Automatic Polyp Segmentation via Multi-scale Subtraction Network [100.94922587360871]
In clinical practice, precise polyp segmentation provides important information in the early detection of colorectal cancer.
Most existing methods are based on a U-shaped structure and use element-wise addition or concatenation to progressively fuse features from different levels in the decoder.
We propose a multi-scale subtraction network (MSNet) to segment polyps from colonoscopy images.
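A minimal sketch of fusing adjacent levels by element-wise subtraction instead of addition or concatenation; the surrounding 3x3 convolutions and the absolute difference are assumptions standing in for MSNet's subtraction unit.

```python
# Sketch of a subtraction unit: adjacent encoder levels are fused by an
# element-wise absolute difference rather than addition or concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtractionUnit(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv_a = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_b = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, f_hi, f_lo):               # f_lo comes from a deeper level
        f_lo = F.interpolate(f_lo, size=f_hi.shape[-2:], mode="bilinear",
                             align_corners=False)
        return torch.abs(self.conv_a(f_hi) - self.conv_b(f_lo))

su = SubtractionUnit(64)
print(su(torch.randn(1, 64, 44, 44), torch.randn(1, 64, 22, 22)).shape)
```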
arXiv Detail & Related papers (2021-08-11T07:54:07Z) - Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention
and Dynamic Resampling [13.542898009730804]
The performance of such algorithms is strongly affected by how the multi-modal information is fused.
We present the Max-Fusion U-Net that achieves improved pathology segmentation performance.
We evaluate our method on the Myocardial Pathology Segmentation (MyoPS) challenge dataset, which combines multi-sequence CMR.
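The max-fusion idea can be sketched as a pixel-wise maximum over modality-specific features; the single-convolution encoders and three input sequences below are assumptions, not the paper's architecture.

```python
# Sketch of pixel-wise max fusion: each modality gets its own encoder and the
# fused feature keeps, per location, the strongest modality response.
import torch
import torch.nn as nn

encoders = nn.ModuleList(nn.Conv2d(1, 16, 3, padding=1) for _ in range(3))
modalities = [torch.randn(2, 1, 96, 96) for _ in range(3)]   # e.g. three CMR sequences

feats = torch.stack([enc(m) for enc, m in zip(encoders, modalities)], dim=0)
fused = feats.max(dim=0).values                 # element-wise max across modalities
print(fused.shape)                              # torch.Size([2, 16, 96, 96])
```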
arXiv Detail & Related papers (2020-09-05T17:24:23Z) - PraNet: Parallel Reverse Attention Network for Polyp Segmentation [155.93344756264824]
We propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images.
We first aggregate the features in high-level layers using a parallel partial decoder (PPD).
In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues.
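A minimal sketch of a reverse-attention step of this kind: the coarse prediction is inverted so the refinement focuses on regions not yet captured; the refinement convolution and the residual addition to the coarse logits are assumptions.

```python
# Sketch of reverse attention: invert the sigmoid of the coarse map so the
# network attends to the currently missed regions/boundaries, then refine.
import torch
import torch.nn as nn
import torch.nn.functional as F

def reverse_attention(feat, coarse_map, refine: nn.Module):
    # feat: (B, C, H, W) side features; coarse_map: (B, 1, h, w) coarse logits
    coarse = F.interpolate(coarse_map, size=feat.shape[-2:], mode="bilinear",
                           align_corners=False)
    rev = 1.0 - torch.sigmoid(coarse)            # focus on regions not yet segmented
    return refine(feat * rev) + coarse           # residual refinement of the logits

refine = nn.Conv2d(64, 1, 3, padding=1)
out = reverse_attention(torch.randn(1, 64, 44, 44), torch.randn(1, 1, 11, 11), refine)
print(out.shape)                                 # torch.Size([1, 1, 44, 44])
```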
arXiv Detail & Related papers (2020-06-13T08:13:43Z)