Cross-modality Attention Adapter: A Glioma Segmentation Fine-tuning
Method for SAM Using Multimodal Brain MR Images
- URL: http://arxiv.org/abs/2307.01124v1
- Date: Mon, 3 Jul 2023 15:55:18 GMT
- Title: Cross-modality Attention Adapter: A Glioma Segmentation Fine-tuning
Method for SAM Using Multimodal Brain MR Images
- Authors: Xiaoyu Shi, Shurong Chai, Yinhao Li, Jingliang Cheng, Jie Bai, Guohua
Zhao and Yen-Wei Chen
- Abstract summary: We propose a cross-modality attention adapter based on multimodal fusion to fine-tune the foundation model to accomplish the task of glioma segmentation in multimodal MRI brain images.
Our proposed method outperforms current state-of-the-art methods, achieving a Dice score of 88.38% and a Hausdorff distance of 10.64, a 4% increase in Dice for segmenting the glioma region to support glioma treatment.
- Score: 7.8475485225910555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: According to the 2021 World Health Organization (WHO) Classification scheme
for gliomas, glioma segmentation is an important basis for diagnosis and
genotype prediction. In general, 3D multimodal brain MRI is an effective
diagnostic tool. Over the past decade, the use of machine learning,
particularly deep learning, for medical image processing has grown rapidly.
Thanks to the development of foundation models, models pre-trained on
large-scale datasets have achieved strong results on a variety of tasks.
However, for medical imaging tasks with small datasets, deep learning methods
struggle to achieve comparable results on real-world data. In this paper, we
propose a cross-modality attention adapter based on multimodal fusion to
fine-tune the foundation model for glioma segmentation in multimodal brain MR
images. The effectiveness of the proposed method is validated on our private
glioma dataset from the First Affiliated Hospital of Zhengzhou University
(FHZU) in Zhengzhou, China. Our proposed method outperforms current
state-of-the-art methods, achieving a Dice score of 88.38% and a Hausdorff
distance of 10.64, a 4% increase in Dice for segmenting the glioma region to
support glioma treatment.
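The paper itself does not include code, but the general shape of a cross-modality attention adapter can be illustrated with a short sketch. The PyTorch module below is a hypothetical illustration, not the authors' implementation: the module name, token shapes, bottleneck size, and choice of a "primary" modality are all assumptions. It fuses features from the other MRI modalities into the primary-modality features via cross-attention, then feeds the fused signal through a small trainable bottleneck adapter added residually to the frozen backbone features, which is the typical adapter pattern for fine-tuning a foundation model such as SAM.

```python
# Hypothetical sketch of a cross-modality attention adapter (not the authors' code).
# Assumes per-modality feature maps of shape (B, N, C) from a frozen SAM-style encoder.
import torch
import torch.nn as nn


class CrossModalityAttentionAdapter(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, bottleneck: int = 64):
        super().__init__()
        # Cross-attention: primary-modality tokens attend to tokens of the other modalities.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Lightweight bottleneck adapter; only this (and the attention) would be trained.
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, dim),
        )

    def forward(self, primary: torch.Tensor, others: list[torch.Tensor]) -> torch.Tensor:
        # primary: (B, N, C) features of the reference modality (e.g. T1ce).
        # others:  list of (B, N, C) features from the remaining modalities.
        context = torch.cat(others, dim=1)                       # (B, M*N, C)
        fused, _ = self.cross_attn(self.norm(primary), context, context)
        # Residual update keeps the frozen backbone features intact.
        return primary + self.adapter(fused)


# Usage with dummy tensors standing in for encoder features of 4 MRI modalities.
feats = [torch.randn(2, 256, 768) for _ in range(4)]            # T1, T1ce, T2, FLAIR
adapter = CrossModalityAttentionAdapter(dim=768)
out = adapter(feats[0], feats[1:])                               # (2, 256, 768)
```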
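The two reported metrics are standard for segmentation evaluation. The snippet below is a generic illustration of how a Dice score and a Hausdorff distance can be computed for a pair of binary masks with NumPy and SciPy; it is not tied to the authors' evaluation code, and the toy volumes are made up for demonstration.

```python
# Generic illustration of the Dice coefficient and Hausdorff distance for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    # Dice = 2|P ∩ G| / (|P| + |G|)
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0


def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    # Symmetric Hausdorff distance between the two foreground point sets (in voxels).
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])


# Toy 3D masks: two cubes offset by 2 voxels along the first axis.
pred = np.zeros((32, 32, 32), dtype=np.uint8)
gt = np.zeros_like(pred)
pred[8:20, 8:20, 8:20] = 1
gt[10:22, 8:20, 8:20] = 1
print(dice_coefficient(pred, gt))   # ~0.833
print(hausdorff_distance(pred, gt)) # 2.0
```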
Related papers
- Unified HT-CNNs Architecture: Transfer Learning for Segmenting Diverse Brain Tumors in MRI from Gliomas to Pediatric Tumors [2.104687387907779]
We introduce HT-CNNs, an ensemble of Hybrid Transformers and Convolutional Neural Networks optimized through transfer learning for varied brain tumor segmentation.
This method captures spatial and contextual details from MRI data, fine-tuned on diverse datasets representing common tumor types.
Our findings underscore the potential of transfer learning and ensemble approaches in medical image segmentation, indicating a substantial enhancement in clinical decision-making and patient care.
arXiv Detail & Related papers (2024-12-11T09:52:01Z)
- Cross-Modal Domain Adaptation in Brain Disease Diagnosis: Maximum Mean Discrepancy-based Convolutional Neural Networks [0.0]
Brain disorders are a major challenge to global health, causing millions of deaths each year.
Accurate diagnosis of these diseases relies heavily on advanced medical imaging techniques such as MRI and CT.
The scarcity of annotated data poses a significant challenge in deploying machine learning models for medical diagnosis.
arXiv Detail & Related papers (2024-05-06T07:44:46Z)
- GAN-GA: A Generative Model based on Genetic Algorithm for Medical Image Generation [0.0]
Generative models offer a promising solution for addressing medical image shortage problems.
This paper proposes the GAN-GA, a generative model optimized by embedding a genetic algorithm.
The proposed model enhances image fidelity and diversity while preserving distinctive features.
arXiv Detail & Related papers (2023-12-30T20:16:45Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization approach for hybrid medical image segmentation models.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistency Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods and another using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)