FusionU-Net: U-Net with Enhanced Skip Connection for Pathology Image
Segmentation
- URL: http://arxiv.org/abs/2310.10951v1
- Date: Tue, 17 Oct 2023 02:56:10 GMT
- Title: FusionU-Net: U-Net with Enhanced Skip Connection for Pathology Image
Segmentation
- Authors: Zongyi Li, Hongbing Lyu, Jun Wang
- Abstract summary: FusionU-Net is based on U-Net structure and incorporates a fusion module to exchange information between different skip connections.
We conducted extensive experiments on multiple pathology image datasets to evaluate our model and found that FusionU-Net achieves better performance compared to other competing methods.
- Score: 9.70345458475663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, U-Net and its variants have been widely used in pathology
image segmentation tasks. One of the key designs of U-Net is the use of skip
connections between the encoder and decoder, which helps to recover detailed
information after upsampling. While most variants of U-Net adopt the original
skip connection design, there is a semantic gap between the encoder and decoder
that can negatively impact model performance. Therefore, it is important to
reduce this semantic gap before applying the skip connections. To address this
issue, we propose a new segmentation network called FusionU-Net, which is based
on the U-Net structure and incorporates a fusion module that exchanges information
between different skip connections to reduce semantic gaps. Unlike the fusion
modules of existing networks, ours is based on a two-round fusion design
that fully considers the local relevance between adjacent encoder layer outputs
and the need for bi-directional information exchange across multiple layers. We
conducted extensive experiments on multiple pathology image datasets to
evaluate our model and found that FusionU-Net achieves better performance
compared to other competing methods. We argue that our fusion module is more
effective than the fusion designs of existing networks, and that it can be easily
embedded into other networks to further enhance model performance.
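The abstract does not spell out the internals of the fusion module, so the following is only a minimal sketch of what a two-round, bi-directional fusion between two adjacent skip connections could look like. The pooling/upsampling choices, channel sizes, and the `TwoRoundFusion` name are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch: one possible reading of a "two-round, bi-directional fusion"
# between two adjacent skip connections. Not the paper's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoRoundFusion(nn.Module):
    def __init__(self, ch_high: int, ch_low: int):
        # ch_high: channels of the shallower (higher-resolution) encoder output
        # ch_low:  channels of the deeper (lower-resolution) encoder output
        super().__init__()
        self.down_fuse = nn.Conv2d(ch_high + ch_low, ch_low, kernel_size=3, padding=1)
        self.up_fuse = nn.Conv2d(ch_high + ch_low, ch_high, kernel_size=3, padding=1)

    def forward(self, f_high: torch.Tensor, f_low: torch.Tensor):
        # Round 1: pass shallow detail down to the deeper skip connection.
        f_high_down = F.avg_pool2d(f_high, kernel_size=2)
        f_low = F.relu(self.down_fuse(torch.cat([f_high_down, f_low], dim=1)))
        # Round 2: pass the refined deep semantics back up to the shallow skip.
        f_low_up = F.interpolate(f_low, scale_factor=2, mode="bilinear", align_corners=False)
        f_high = F.relu(self.up_fuse(torch.cat([f_low_up, f_high], dim=1)))
        return f_high, f_low  # both skip connections, now better aligned

# Usage (assumed shapes): fuse = TwoRoundFusion(64, 128)
# s1, s2 = fuse(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
```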
Related papers
- Narrowing the semantic gaps in U-Net with learnable skip connections:
The case of medical image segmentation [12.812992773512871]
We propose a new segmentation framework, named UDTransNet, to solve three semantic gaps in U-Net.
Specifically, we propose a Dual Attention Transformer (DAT) module for capturing the channel- and spatial-wise relationships, and a Decoder-guided Recalibration Attention (DRA) module for effectively connecting the DAT tokens and the decoder features.
Our UDTransNet produces higher evaluation scores and finer segmentation results with relatively fewer parameters than state-of-the-art segmentation methods on different public datasets.
arXiv Detail & Related papers (2023-12-23T07:39:42Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm that produces the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation [91.15865862160088]
We introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner.
Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views.
arXiv Detail & Related papers (2022-07-06T11:48:08Z)
- Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation [87.01669173673288]
We propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network.
A co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features.
The experimental results on four benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance without any post-processing.
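As a rough illustration of the co-attention idea mentioned above (parallel update of multi-modal features), the sketch below updates visual and language tokens from each other in a single pass. The single-head formulation, dimensions, and the `CoAttention` name are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a co-attention step between visual and language features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.v2l = nn.Linear(dim, dim)   # projects visual tokens for attending to language
        self.l2v = nn.Linear(dim, dim)   # projects language tokens for attending to vision

    def forward(self, vis: torch.Tensor, lang: torch.Tensor):
        # vis:  (B, Nv, dim) flattened visual tokens; lang: (B, Nl, dim) word features.
        scale = vis.shape[-1] ** 0.5
        attn_vl = F.softmax(self.v2l(vis) @ lang.transpose(1, 2) / scale, dim=-1)
        attn_lv = F.softmax(self.l2v(lang) @ vis.transpose(1, 2) / scale, dim=-1)
        vis_new = vis + attn_vl @ lang    # visual tokens enriched with language context
        lang_new = lang + attn_lv @ vis   # language tokens enriched with visual context
        return vis_new, lang_new          # both modalities updated in parallel
```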
arXiv Detail & Related papers (2021-05-05T02:27:25Z)
- Attentional Feature Fusion [4.265244011052538]
We propose a uniform and general scheme, namely attentional feature fusion.
We show that our models outperform state-of-the-art networks on both the CIFAR-100 and ImageNet datasets.
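A minimal sketch of the attentional feature fusion idea, in which two feature maps are merged by a learned soft selection instead of plain addition. The simplified two-branch attention and the `SimpleAFF` name below are assumptions, not a faithful reproduction of the paper's module.

```python
# Hedged sketch: element-wise soft selection between two feature maps.
import torch
import torch.nn as nn

class SimpleAFF(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = max(channels // reduction, 4)
        # Global (per-channel) context branch.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )
        # Local (per-pixel) context branch.
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        s = x + y                                         # initial integration
        w = torch.sigmoid(self.global_att(s) + self.local_att(s))
        return w * x + (1.0 - w) * y                      # attention-weighted fusion
```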
arXiv Detail & Related papers (2020-09-29T15:10:18Z)
- Suppress and Balance: A Simple Gated Network for Salient Object Detection [89.88222217065858]
We propose a simple gated network (GateNet) to solve both issues at once.
With the help of multilevel gate units, the valuable context information from the encoder can be optimally transmitted to the decoder.
In addition, we adopt the atrous spatial pyramid pooling based on the proposed "Fold" operation (Fold-ASPP) to accurately localize salient objects of various scales.
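A minimal sketch of a gated skip connection in the spirit of the gate units described above: the decoder feature weights how much of the encoder feature is transmitted across the skip connection. The 1x1-convolution gate and the `GatedSkip` name are generic assumptions, and Fold-ASPP is omitted.

```python
# Hedged sketch: a generic gated skip connection, not GateNet's exact design.
import torch
import torch.nn as nn

class GatedSkip(nn.Module):
    def __init__(self, enc_ch: int, dec_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(enc_ch + dec_ch, enc_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Assumes enc_feat and dec_feat share the same spatial size at this level.
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))  # gate values in (0, 1)
        return g * enc_feat                                     # filtered skip feature
```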
arXiv Detail & Related papers (2020-07-16T02:00:53Z)
- BiO-Net: Learning Recurrent Bi-directional Connections for Encoder-Decoder Architecture [82.64881585566825]
We present a novel Bi-directional O-shape network (BiO-Net) that reuses the building blocks in a recurrent manner without introducing any extra parameters.
Our method significantly outperforms the vanilla U-Net as well as other state-of-the-art methods.
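A hedged sketch of recurrent bi-directional connections in the spirit of the summary above: the same encoder and decoder blocks are reused for several passes, with the decoder output fed back to the encoder as a backward skip connection. The tiny one-level layout, channel counts, iteration count, and the `TinyBiONet` name are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: reuse the same blocks recurrently with a backward skip connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBiONet(nn.Module):
    def __init__(self, in_ch: int = 3, ch: int = 32, iterations: int = 2):
        super().__init__()
        self.iterations = iterations
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.enc = nn.Conv2d(2 * ch, ch, 3, padding=1)   # reused every pass (no extra parameters)
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)       # bottleneck, also reused
        self.dec = nn.Conv2d(2 * ch, ch, 3, padding=1)   # reused every pass
        self.head = nn.Conv2d(ch, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.stem(x))                  # (B, ch, H, W); H and W assumed even
        back = torch.zeros_like(x)                # backward skip is empty on the first pass
        for _ in range(self.iterations):
            enc = F.relu(self.enc(torch.cat([x, back], dim=1)))        # encoder uses backward skip
            mid = F.relu(self.mid(F.max_pool2d(enc, 2)))               # down to the bottleneck
            up = F.interpolate(mid, scale_factor=2, mode="nearest")    # back up
            back = F.relu(self.dec(torch.cat([up, enc], dim=1)))       # decoder uses forward skip
        return self.head(back)

# Usage: TinyBiONet()(torch.randn(1, 3, 64, 64)).shape -> (1, 1, 64, 64)
```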
arXiv Detail & Related papers (2020-07-01T05:07:49Z)
- Multi-Scale Boosted Dehazing Network with Dense Feature Fusion [92.92572594942071]
We propose a Multi-Scale Boosted Dehazing Network with Dense Feature Fusion based on the U-Net architecture.
We show that the proposed model performs favorably against the state-of-the-art approaches on the benchmark datasets as well as real-world hazy images.
arXiv Detail & Related papers (2020-04-28T09:34:47Z)
- CAggNet: Crossing Aggregation Network for Medical Image Segmentation [4.56877715768796]
Crossing Aggregation Network (CAggNet) is a novel densely connected semantic segmentation approach for medical image analysis.
In CAggNet, the simple skip connection structure of general U-Net is replaced by aggregations of multi-level down-sampling and up-sampling layers.
We have evaluated and compared our CAggNet with several advanced U-Net based methods on two public medical image datasets.
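A minimal sketch of a cross-level aggregation node replacing a plain skip connection: features from the shallower, same, and deeper encoder levels are resampled to a common resolution and merged. Channel counts, the resampling operators, and the `CrossLevelAggregation` name are assumptions for illustration, not CAggNet's exact layout.

```python
# Hedged sketch: aggregate neighbouring encoder levels instead of a plain skip.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLevelAggregation(nn.Module):
    def __init__(self, ch_shallow: int, ch_same: int, ch_deep: int, out_ch: int):
        super().__init__()
        self.merge = nn.Conv2d(ch_shallow + ch_same + ch_deep, out_ch, kernel_size=3, padding=1)

    def forward(self, f_shallow, f_same, f_deep):
        # Bring neighbouring levels to the resolution of f_same, then aggregate.
        down = F.avg_pool2d(f_shallow, kernel_size=2)                       # shallower level, downsampled
        up = F.interpolate(f_deep, scale_factor=2, mode="bilinear",
                           align_corners=False)                             # deeper level, upsampled
        return F.relu(self.merge(torch.cat([down, f_same, up], dim=1)))
```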
arXiv Detail & Related papers (2020-04-16T15:39:38Z)