Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI
- URL: http://arxiv.org/abs/2408.05803v1
- Date: Sun, 11 Aug 2024 15:46:00 GMT
- Title: Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI
- Authors: Lei Zhou, Yuzhong Zhang, Jiadong Zhang, Xuejun Qian, Chen Gong, Kun Sun, Zhongxiang Ding, Xing Wang, Zhenhui Li, Zaiyi Liu, Dinggang Shen
- Abstract summary: We propose a hybrid network combining convolutional neural network (CNN) and transformer layers.
Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves performance superior to state-of-the-art methods.
- Score: 58.809276442508256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown great promise in clinical practice, particularly for identifying the presence of breast disease. However, accurate segmentation of breast tumors is a challenging task, often necessitating the development of complex networks. To strike an optimal trade-off between computational cost and segmentation performance, we propose a hybrid network combining convolutional neural network (CNN) and transformer layers. Specifically, the hybrid network consists of an encoder-decoder architecture built by stacking convolution and deconvolution layers. Effective 3D transformer layers are then implemented after the encoder subnetworks to capture global dependencies between the bottleneck features. To improve the efficiency of the hybrid network, two parallel encoder subnetworks are designed for the decoder and the transformer layers, respectively. To further enhance the discriminative capability of the hybrid network, a prototype learning guided prediction module is proposed, where category-specific prototypical features are calculated through online clustering. All learned prototypical features are finally combined with the features from the decoder for tumor mask prediction. Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves performance superior to state-of-the-art (SOTA) methods, while maintaining a balance between segmentation accuracy and computational cost. Moreover, we demonstrate that the automatically generated tumor masks can be effectively applied to distinguish the HER2-positive subtype from the HER2-negative subtype, with accuracy similar to analysis based on manual tumor segmentation. The source code is available at https://github.com/ZhouL-lab/PLHN.
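The prototype-guided prediction described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the masked-average prototype computation (a common simplification of online clustering) and the cosine-similarity readout are assumptions, as are all names below.

```python
import numpy as np

def masked_average_prototype(features, mask):
    """Average the feature vectors of the pixels belonging to one category.

    features: (H, W, C) feature map, e.g. from a decoder.
    mask:     (H, W) binary mask selecting the category's pixels.
    """
    weights = mask.astype(np.float64)
    total = weights.sum()
    if total == 0:
        return np.zeros(features.shape[-1])
    return (features * weights[..., None]).sum(axis=(0, 1)) / total

def prototype_prediction(features, prototypes):
    """Assign each pixel to the category whose prototype is most similar.

    prototypes: (K, C) array, one prototype per category.
    Returns an (H, W) map of category indices.
    """
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    similarity = f @ p.T  # (H, W, K) cosine similarities
    return similarity.argmax(axis=-1)

# Toy example: a 4x4 feature map with two categories (background / tumor).
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 4, 8))
tumor_mask = np.zeros((4, 4))
tumor_mask[1:3, 1:3] = 1
protos = np.stack([
    masked_average_prototype(features, 1 - tumor_mask),  # background prototype
    masked_average_prototype(features, tumor_mask),      # tumor prototype
])
pred = prototype_prediction(features, protos)
```

In the paper's pipeline the prototype scores are combined with the decoder features rather than used alone; the sketch only shows the prototype half of that combination.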
Related papers
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Enhancing Motor Imagery Decoding in Brain Computer Interfaces using Riemann Tangent Space Mapping and Cross Frequency Coupling [5.860347939369221]
Motor Imagery (MI) serves as a crucial experimental paradigm within the realm of Brain Computer Interfaces (BCIs).
This paper introduces a novel approach to enhance the representation quality and decoding capability pertaining to MI features.
A lightweight convolutional neural network is employed for further feature extraction and classification, operating under the joint supervision of cross-entropy and center loss.
arXiv Detail & Related papers (2023-10-29T23:37:47Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we propose a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation [0.46040036610482665]
MaxViT-UNet is a hybrid vision transformer (CNN-Transformer) for medical image segmentation.
The proposed Hybrid Decoder is designed to harness the power of both the convolution and self-attention mechanisms at each decoding stage.
The inclusion of multi-axis self-attention, within each decoder stage, significantly enhances the discriminating capacity between the object and background regions.
arXiv Detail & Related papers (2023-05-15T07:23:54Z)
- An Unpaired Cross-modality Segmentation Framework Using Data Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular Schwannoma and Cochlea [7.7150383247700605]
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans.
The 2022 edition extends the segmentation task by including multi-institutional scans.
We propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks.
arXiv Detail & Related papers (2022-11-28T01:15:33Z)
- A Transformer-based Generative Adversarial Network for Brain Tumor Segmentation [4.394247741333439]
We propose a transformer-based generative adversarial network to automatically segment brain tumors from multi-modality MRI.
Our architecture consists of a generator and a discriminator, trained in a min-max game.
The discriminator we designed is a CNN-based network with a multi-scale $L_1$ loss, which has proven effective for medical semantic image segmentation.
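A multi-scale $L_1$ loss of this kind can be sketched in a few lines of NumPy. This is an illustration under our own assumptions, not the paper's code: the discriminator's feature maps at several scales are compared between real and generated segmentations with an absolute difference, and the per-scale distances are averaged.

```python
import numpy as np

def multi_scale_l1_loss(real_feats, fake_feats):
    """Mean absolute difference between real and fake discriminator
    features, averaged over all scales.

    real_feats, fake_feats: lists of arrays, one feature map per scale.
    """
    assert len(real_feats) == len(fake_feats)
    per_scale = [np.abs(r - f).mean() for r, f in zip(real_feats, fake_feats)]
    return float(np.mean(per_scale))

# Toy example with three scales of discriminator features.
rng = np.random.default_rng(1)
real = [rng.normal(size=(16 // s, 16 // s, 4)) for s in (1, 2, 4)]
fake = [r + 0.1 for r in real]  # generated features offset by a constant 0.1
loss = multi_scale_l1_loss(real, fake)  # close to 0.1 by construction
```

Averaging over scales (rather than summing) keeps the loss magnitude independent of how many feature levels the discriminator exposes; the actual weighting per scale is a design choice of the original work.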
arXiv Detail & Related papers (2022-07-28T14:55:18Z)
- Transformer based Generative Adversarial Network for Liver Segmentation [4.317557160310758]
We propose a new hybrid segmentation approach combining Transformers with a Generative Adversarial Network (GAN).
Our model achieved a high Dice coefficient of 0.9433, recall of 0.9515, and precision of 0.9376, outperforming other Transformer-based approaches.
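The reported metrics follow standard definitions for binary masks; as a hedged illustration (not the paper's evaluation code), Dice, recall, and precision can be computed from true/false positives and false negatives:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Dice, recall, and precision for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 1.0
    return dice, recall, precision

# Toy example: the prediction matches the target on 2 of 3 positive pixels
# and adds 1 spurious pixel, giving Dice = recall = precision = 2/3.
target = np.array([[1, 1, 0], [1, 0, 0]])
pred = np.array([[1, 1, 0], [0, 0, 1]])
dice, recall, precision = segmentation_metrics(pred, target)
```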
arXiv Detail & Related papers (2022-05-21T19:55:43Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- CleftNet: Augmented Deep Learning for Synaptic Cleft Detection from Brain Electron Microscopy [49.3704402041314]
We propose a novel and augmented deep learning model, known as CleftNet, for improving synaptic cleft detection from brain EM images.
We first propose two novel network components, known as the feature augmentor and the label augmentor, for augmenting features and labels to improve cleft representations.
arXiv Detail & Related papers (2021-01-12T02:45:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.