Dual-Attention Enhanced BDense-UNet for Liver Lesion Segmentation
- URL: http://arxiv.org/abs/2107.11645v1
- Date: Sat, 24 Jul 2021 16:28:00 GMT
- Title: Dual-Attention Enhanced BDense-UNet for Liver Lesion Segmentation
- Authors: Wenming Cao, Philip L.H. Yu, Gilbert C.S. Lui, Keith W.H. Chiu,
Ho-Ming Cheng, Yanwen Fang, Man-Fung Yuen, Wai-Kay Seto
- Abstract summary: We propose a new segmentation network, termed DA-BDense-UNet, that integrates DenseUNet and a bidirectional LSTM with an attention mechanism.
DenseUNet enables learning sufficiently diverse features and enhances the representational power of the network by regulating information flow.
- Score: 3.1667381240856987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a new segmentation network, termed
DA-BDense-UNet, that integrates DenseUNet and a bidirectional LSTM with an
attention mechanism. DenseUNet enables learning sufficiently diverse features
and enhances the representational power of the network by regulating
information flow. The bidirectional LSTM explores the relationships between the
encoded features and the up-sampled features along the encoding and decoding
paths. Meanwhile, we introduce attention gates (AG) into DenseUNet to
progressively suppress responses in irrelevant background regions and magnify
responses in salient regions. In addition, the attention in the bidirectional
LSTM accounts for the differing contributions of the encoded features and the
up-sampled features to segmentation quality, and can in turn assign appropriate
weights to these two kinds of features. We conduct experiments on liver CT
image datasets collected from multiple hospitals, comparing our method against
state-of-the-art segmentation models. Experimental results indicate that the
proposed DA-BDense-UNet achieves competitive performance in terms of the Dice
coefficient, which demonstrates its effectiveness.
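The abstract describes two attention mechanisms: attention gates that suppress background responses on skip connections, and an attention that weights the encoded features against the up-sampled features before fusion. The paper does not include an implementation here, so the following is only a minimal NumPy sketch of both ideas in the style of additive (Attention U-Net) gating; the weight matrices `W_x`, `W_g`, the vectors `psi` and `v`, and all dimensions are hypothetical placeholders, not the authors' actual parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (Attention U-Net style), simplified sketch.

    x : (n, d) skip-connection (encoder) features
    g : (n, d) gating signal from the coarser decoder level
    Returns x scaled by attention coefficients in (0, 1), so background
    responses are diminished and salient responses pass through.
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                # (n, 1) attention coefficients
    return alpha * x

def attentive_fusion(enc, dec, v):
    """Weight encoded vs. up-sampled features by learned scores (sketch).

    enc, dec : (n, d) encoder and up-sampled decoder features
    v        : (d,) hypothetical scoring vector
    Per-sample softmax weights adjust each feature stream's contribution.
    """
    scores = np.stack([enc @ v, dec @ v], axis=-1)  # (n, 2)
    w = softmax(scores)                             # weights sum to 1 per row
    return w[:, :1] * enc + w[:, 1:] * dec

# toy example with made-up shapes
rng = np.random.default_rng(0)
n, d, k = 4, 8, 16
x = rng.standard_normal((n, d))
g = rng.standard_normal((n, d))
out = attention_gate(x, g,
                     W_x=rng.standard_normal((d, k)),
                     W_g=rng.standard_normal((d, k)),
                     psi=rng.standard_normal((k, 1)))
fused = attentive_fusion(x, g, v=rng.standard_normal(d))
print(out.shape, fused.shape)  # both match the input shape (4, 8)
```

Because the gate's coefficients lie in (0, 1), the gated output never exceeds the input in magnitude, which is the "diminish unrelated responses" behavior the abstract describes; the fusion weights form a convex combination of the two feature streams.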
Related papers
- Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation [44.54301473673582]
Semi-supervised learning (SSL) has achieved notable progress in medical image segmentation.
Recent developments in visual foundation models, such as the Segment Anything Model (SAM), have demonstrated remarkable adaptability.
We propose a cross-prompting consistency method with segment anything model (CPC-SAM) for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2024-07-07T15:43:20Z)
- DiffVein: A Unified Diffusion Network for Finger Vein Segmentation and Authentication [50.017055360261665]
We introduce DiffVein, a unified diffusion model-based framework which simultaneously addresses vein segmentation and authentication tasks.
For better feature interaction between these two branches, we introduce two specialized modules.
In this way, our framework allows for a dynamic interplay between diffusion and segmentation embeddings.
arXiv Detail & Related papers (2024-02-03T06:49:42Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Polyp Segmentation [52.06525450636897]
Automatic polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer.
Existing methods rely heavily on fully supervised training, which requires a large amount of labeled data with time-consuming pixel-wise annotations.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised polyp segmentation (DEC-Seg) from colonoscopy images.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Superresolution and Segmentation of OCT scans using Multi-Stage adversarial Guided Attention Training [18.056525121226862]
We propose the multi-stage & multi-discriminatory generative adversarial network (MultiSDGAN) to translate OCT scans into high-resolution segmentation labels.
We evaluate and compare various combinations of channel and spatial attention to the MultiSDGAN architecture to extract more powerful feature maps.
Our results demonstrate relative improvements of 21.44% and 19.45% on the Dice coefficient and SSIM, respectively.
arXiv Detail & Related papers (2022-06-10T00:26:55Z)
- Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition [63.07844685982738]
This paper presents a new model, the Gated Bidirectional Alignment Network (GBAN), which consists of an attention-based bidirectional alignment network over LSTM hidden states.
We empirically show that the attention-aligned representations outperform the last-hidden-states of LSTM significantly.
The proposed GBAN model outperforms existing state-of-the-art multimodal approaches on the IEMOCAP dataset.
arXiv Detail & Related papers (2022-01-17T09:46:59Z)
- A Tri-attention Fusion Guided Multi-modal Segmentation Network [2.867517731896504]
We propose a multi-modality segmentation network guided by a novel tri-attention fusion.
Our network includes N model-independent encoding paths with N image sources, a tri-attention fusion block, a dual-attention fusion block, and a decoding path.
Our experiment results tested on BraTS 2018 dataset for brain tumor segmentation demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-11-02T14:36:53Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Dual-Cross Central Difference Network for Face Anti-Spoofing [54.81222020394219]
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems.
Central difference convolution (CDC) has shown its excellent representation capacity for the FAS task.
We propose two Cross Central Difference Convolutions (C-CDC), which exploit the difference of the center and surround sparse local features.
arXiv Detail & Related papers (2021-05-04T05:11:47Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA)
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.