Feedback Chain Network For Hippocampus Segmentation
- URL: http://arxiv.org/abs/2211.07891v1
- Date: Tue, 15 Nov 2022 04:32:10 GMT
- Title: Feedback Chain Network For Hippocampus Segmentation
- Authors: Heyu Huang, Runmin Cong, Lianhe Yang, Ling Du, Cong Wang, and Sam
Kwong
- Abstract summary: We propose a novel hierarchical feedback chain network for the hippocampus segmentation task.
The proposed approach achieves state-of-the-art performance on three publicly available datasets.
- Score: 59.74305660815117
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The hippocampus plays a vital role in the diagnosis and treatment of many
neurological disorders. In recent years, deep learning has made great progress
in the field of medical image segmentation, and the state of the art on related
tasks has been repeatedly advanced. In this paper, we focus on the
hippocampus segmentation task and propose a novel hierarchical feedback chain
network. The feedback chain structure unit learns deeper and wider feature
representation of each encoder layer through the hierarchical feature
aggregation feedback chains, and achieves feature selection and feedback
through the feature handover attention module. Then, we embed a global pyramid
attention unit between the feature encoder and the decoder to further modify
the encoder features, including the pair-wise pyramid attention module for
achieving adjacent attention interaction and the global context modeling module
for capturing the long-range knowledge. The proposed approach achieves
state-of-the-art performance on three publicly available datasets, compared
with existing hippocampus segmentation approaches.
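The abstract describes two mechanisms: hierarchical feedback chains that pass deeper-layer features back to shallower encoder layers, and a "feature handover attention" step that selects which feedback to pass on. As a rough illustration only (not the authors' implementation), the toy sketch below uses plain Python lists as stand-ins for feature maps, a sigmoid gate as a stand-in for the handover attention, and additive aggregation for the feedback chain; all function names here are hypothetical.

```python
# Toy sketch of hierarchical feedback aggregation with a gated "handover".
# NOT the paper's code: lists stand in for feature maps, a sigmoid gate
# stands in for the feature handover attention module.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def handover_gate(own, feedback):
    """Gate each feedback value by a sigmoid of the layer's own feature
    (a stand-in for attention-based feature selection)."""
    return [f * sigmoid(o) for o, f in zip(own, feedback)]

def feedback_chain(layers):
    """Aggregate deeper-layer features back into each shallower layer.
    `layers` is a list of equal-length feature vectors, shallow to deep."""
    refined = [list(layer) for layer in layers]
    # Walk from the deepest layer toward the shallowest, chaining feedback.
    for i in range(len(layers) - 2, -1, -1):
        fb = handover_gate(refined[i], refined[i + 1])
        refined[i] = [a + b for a, b in zip(refined[i], fb)]
    return refined

features = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # three toy encoder layers
out = feedback_chain(features)
```

The deepest layer is left unchanged, while each shallower layer is enriched by gated feedback that has already accumulated information from all layers below it, mimicking the chained (rather than pairwise) aggregation the abstract describes.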
Related papers
- U-Net v2: Rethinking the Skip Connections of U-Net for Medical Image Segmentation [14.450329809640422]
We introduce U-Net v2, a new robust and efficient U-Net variant for medical image segmentation.
It aims to augment the infusion of semantic information into low-level features while simultaneously refining high-level features with finer details.
arXiv Detail & Related papers (2023-11-29T16:35:24Z) - CAT: Learning to Collaborate Channel and Spatial Attention from
Multi-Information Fusion [23.72040577828098]
We propose a plug-and-play attention module, which we term "CAT"-activating the Collaboration between spatial and channel Attentions.
Specifically, we represent traits as trainable coefficients (i.e., colla-factors) to adaptively combine contributions of different attention modules.
Our CAT outperforms existing state-of-the-art attention mechanisms in object detection, instance segmentation, and image classification.
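The CAT summary describes combining attention modules through trainable coefficients ("colla-factors"). A minimal hedged sketch of that idea, under the assumption that the coefficients are softmax-normalized and the attention outputs are simply blended (the function names and toy values below are illustrative, not the CAT authors' code):

```python
# Sketch of colla-factor blending: combine outputs of several attention
# modules with softmax-normalized trainable scalar coefficients.
# Illustrative only, not the CAT implementation.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def combine_attentions(outputs, colla_factors):
    """Blend attention-module outputs (equal-length vectors) using
    softmax-normalized colla-factors, one factor per module."""
    w = softmax(colla_factors)
    n = len(outputs[0])
    return [sum(w[k] * outputs[k][i] for k in range(len(outputs)))
            for i in range(n)]

channel_att = [0.9, 0.1, 0.5]   # toy channel-attention output
spatial_att = [0.2, 0.8, 0.5]   # toy spatial-attention output
# Equal colla-factors reduce to a simple average of the two maps.
blended = combine_attentions([channel_att, spatial_att], [0.0, 0.0])
```

In training, the colla-factors would be learned alongside the network, letting the model shift weight toward whichever attention module helps the task most.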
arXiv Detail & Related papers (2022-12-13T02:34:10Z) - Cross-Enhancement Transformer for Action Segmentation [5.752561578852787]
A novel encoder-decoder structure is proposed in this paper, called Cross-Enhancement Transformer.
Our approach enables effective learning of temporal structure representations through an interactive self-attention mechanism.
In addition, a new loss function that penalizes over-segmentation errors is proposed to enhance the training process.
arXiv Detail & Related papers (2022-05-19T10:06:30Z) - Encoder Fusion Network with Co-Attention Embedding for Referring Image
Segmentation [87.01669173673288]
We propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network.
A co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features.
The experiment results on four benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance without any post-processing.
arXiv Detail & Related papers (2021-05-05T02:27:25Z) - Neural Function Modules with Sparse Arguments: A Dynamic Approach to
Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aim to introduce the same structural capability into deep learning.
Most prior work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z) - Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z) - Multi-Granularity Reference-Aided Attentive Feature Aggregation for
Video-based Person Re-identification [98.7585431239291]
Video-based person re-identification aims at matching the same person across video clips.
In this paper, we propose an attentive feature aggregation module, namely the Multi-Granularity Reference-aided Attentive Feature Aggregation module (MG-RAFA).
Our framework achieves state-of-the-art performance on three benchmark datasets.
arXiv Detail & Related papers (2020-03-27T03:49:21Z) - Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z) - See More, Know More: Unsupervised Video Object Segmentation with
Co-Attention Siamese Networks [184.4379622593225]
We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the unsupervised video object segmentation task.
We emphasize the importance of inherent correlation among video frames and incorporate a global co-attention mechanism.
We propose a unified and end-to-end trainable framework where different co-attention variants can be derived for mining the rich context within videos.
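The COSNet summary centers on a global co-attention mechanism that correlates features across video frames. A hedged, self-contained sketch of the generic co-attention pattern (build an affinity matrix between two frames' features, then let each attend to the other); this is an assumption-laden illustration, not COSNet's actual code:

```python
# Sketch of symmetric co-attention between two frames' position features:
# dot-product affinities, then row/column softmax attention both ways.
# Illustrative only; not the COSNet implementation.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def co_attention(fa, fb):
    """fa, fb: lists of per-position feature vectors for two frames.
    Returns each set re-expressed as attention-weighted sums of the other."""
    # Affinity matrix: dot product between every pair of positions.
    aff = [[sum(x * y for x, y in zip(va, vb)) for vb in fb] for va in fa]
    # Frame A attends over frame B (row-wise softmax).
    fa_new = []
    for row in aff:
        w = softmax(row)
        fa_new.append([sum(w[j] * fb[j][d] for j in range(len(fb)))
                       for d in range(len(fb[0]))])
    # Frame B attends over frame A (column-wise softmax).
    fb_new = []
    for j in range(len(fb)):
        w = softmax([aff[i][j] for i in range(len(fa))])
        fb_new.append([sum(w[i] * fa[i][d] for i in range(len(fa)))
                       for d in range(len(fa[0]))])
    return fa_new, fb_new
```

Applying this step across frame pairs is one way such a mechanism can mine the shared foreground context within a video, which is the correlation the summary emphasizes.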
arXiv Detail & Related papers (2020-01-19T11:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.