Transformer based Generative Adversarial Network for Liver Segmentation
- URL: http://arxiv.org/abs/2205.10663v1
- Date: Sat, 21 May 2022 19:55:43 GMT
- Title: Transformer based Generative Adversarial Network for Liver Segmentation
- Authors: Ugur Demir, Zheyuan Zhang, Bin Wang, Matthew Antalek, Elif Keles,
Debesh Jha, Amir Borhani, Daniela Ladner and Ulas Bagci
- Abstract summary: We propose a new segmentation approach that combines Transformers with a Generative Adversarial Network (GAN).
Our model achieved a Dice coefficient of 0.9433, a recall of 0.9515, and a precision of 0.9376, outperforming other Transformer-based approaches.
- Score: 4.317557160310758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated liver segmentation from radiology scans (CT, MRI) can improve
surgery and therapy planning and follow-up assessment, in addition to its
conventional use in diagnosis and prognosis. Although convolutional neural
networks (CNNs) have become the standard for image segmentation tasks, the
field has more recently shifted toward Transformer-based architectures, whose
self-attention mechanism captures long-range dependencies in signals. In this
study, we propose a new segmentation approach that combines Transformers with
a Generative Adversarial Network (GAN). The premise behind this choice is that
the self-attention mechanism of Transformers allows the network to aggregate
high-dimensional features and model global context, which yields better
segmentation performance than traditional methods. Furthermore, we embed this
Transformer network as the generator in a GAN, so that the discriminator can
judge the credibility of generated segmentation masks against the real masks
coming from human (expert) annotations. This allows the model to capture the
high-dimensional topological information in the masks and produce more
reliable segmentation results. Our model achieved a Dice coefficient of
0.9433, a recall of 0.9515, and a precision of 0.9376, outperforming other
Transformer-based approaches.
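To make the scheme concrete, below is a minimal PyTorch sketch of the training loop the abstract describes: a Transformer-based generator (assumed here as any module `gen` that maps an image to mask logits) is trained with a segmentation loss plus an adversarial term, while a CNN discriminator learns to separate expert masks from generated ones. The class and function names, layer sizes, and the `adv_weight` loss weight are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of Transformer-generator + GAN training for segmentation.
# `gen` is any Transformer segmentation backbone returning mask logits;
# masks are float tensors in [0, 1]. All names here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """CNN that scores the credibility of an (image, mask) pair."""
    def __init__(self, in_ch=2):  # 1 image channel + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2*|X∩Y| / (|X| + |Y|), the metric reported in the abstract."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def train_step(gen, disc, opt_g, opt_d, image, true_mask, adv_weight=0.1):
    bce = F.binary_cross_entropy_with_logits

    # --- discriminator step: expert masks vs. generated masks ---
    fake_mask = torch.sigmoid(gen(image)).detach()  # no grad into generator
    real_logits = disc(image, true_mask)
    fake_logits = disc(image, fake_mask)
    loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator step: segmentation loss + fool-the-discriminator term ---
    logits = gen(image)
    fake_mask = torch.sigmoid(logits)
    seg_loss = bce(logits, true_mask) + (1 - dice_coefficient(fake_mask, true_mask))
    adv_logits = disc(image, fake_mask)
    loss_g = seg_loss + adv_weight * bce(adv_logits, torch.ones_like(adv_logits))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```

The adversarial term rewards masks whose overall shape the discriminator cannot tell apart from expert annotations, which is the mask-topology argument made in the abstract; the `dice_coefficient` helper matches the metric reported in the results.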
Related papers
- TransUKAN: Computing-Efficient Hybrid KAN-Transformer for Enhanced Medical Image Segmentation [5.280523424712006]
U-Net is currently the most widely used architecture for medical image segmentation.
We have improved the KAN to reduce memory usage and computational load.
This approach enhances the model's capability to capture nonlinear relationships.
arXiv Detail & Related papers (2024-09-23T02:52:49Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and Transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- SeUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical Image Segmentation [0.0]
We propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation.
In our approach, the UNet model is designed as a feature extractor to generate multiple feature maps from the input images.
By leveraging the UNet architecture and the self-attention mechanism, our model not only preserves both local and global context information but also captures long-range dependencies between input elements.
arXiv Detail & Related papers (2023-10-16T01:13:38Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we propose a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- A Transformer-based Generative Adversarial Network for Brain Tumor Segmentation [4.394247741333439]
We propose a transformer-based generative adversarial network to automatically segment brain tumors from multi-modality MRI.
Our architecture consists of a generator and a discriminator, which are trained in a min-max game.
The discriminator we designed is a CNN-based network with a multi-scale $L_1$ loss, which has proven effective for medical semantic image segmentation.
arXiv Detail & Related papers (2022-07-28T14:55:18Z)
- TransNorm: Transformer Provides a Strong Spatial Normalization Mechanism for a Deep Segmentation Model [4.320393382724066]
Convolutional neural networks (CNNs) have been the prevailing technique of the medical image processing era.
We propose Trans-Norm, a novel deep segmentation framework that consolidates a Transformer module into both the encoder and the skip-connections of the standard U-Net.
arXiv Detail & Related papers (2022-07-27T09:54:10Z)
- Evaluating Transformer based Semantic Segmentation Networks for Pathological Image Segmentation [2.7029872968576947]
Histopathology has played an essential role in cancer diagnosis.
Various CNN-based automated pathological image segmentation approaches have been developed in computer-assisted pathological image analysis.
As a new deep learning paradigm, Transformer neural networks have shown the unique merit of capturing global long-distance dependencies across the entire image.
arXiv Detail & Related papers (2021-08-26T18:46:43Z)
- Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z)
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
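Several of the related papers above (e.g. TransUNet, TransNorm, seUNet-Trans) share a common hybrid pattern: a convolutional stem extracts local feature maps, and a Transformer encoder then operates on the flattened patch tokens to add the global context that convolutions alone miss. The sketch below illustrates that generic pattern only; the layer sizes and module names are assumptions for illustration, not any specific paper's architecture.

```python
# Hedged sketch of the CNN-stem + Transformer-bottleneck pattern shared by
# several hybrid segmentation papers above. All sizes are illustrative.
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    def __init__(self, in_ch=1, dim=256, heads=8, depth=4):
        super().__init__()
        # CNN stem: local features, spatially downsampled 4x
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        # Self-attention over all patch tokens supplies the long-range
        # context that plain convolutions lack.
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        feats = self.stem(x)                       # (B, C, H/4, W/4)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        tokens = self.transformer(tokens)          # global context mixing
        return tokens.transpose(1, 2).reshape(b, c, h, w)

enc = HybridEncoder()
out = enc(torch.randn(1, 1, 64, 64))  # -> torch.Size([1, 256, 16, 16])
```

A U-Net-style decoder with skip connections would then upsample the returned feature map back to the input resolution; it is omitted here to keep the sketch focused on the hybrid encoder.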
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.