Segment Anything Model for Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2309.08434v2
- Date: Wed, 11 Sep 2024 12:27:55 GMT
- Title: Segment Anything Model for Brain Tumor Segmentation
- Authors: Peng Zhang, Yaping Wang
- Abstract summary: Glioma is a prevalent brain tumor that poses a significant health risk to individuals.
The Segment Anything Model, released by Meta AI, is a foundation model for image segmentation with excellent zero-shot generalization capabilities.
In this study, we evaluated the performance of SAM on brain tumor segmentation and found that, without any model fine-tuning, a gap remains between SAM and the current state-of-the-art (SOTA) model.
- Score: 3.675657219384998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Glioma is a prevalent brain tumor that poses a significant health risk to individuals. Accurate segmentation of brain tumors is essential for clinical diagnosis and treatment. The Segment Anything Model (SAM), released by Meta AI, is a foundation model for image segmentation with excellent zero-shot generalization capabilities. It is therefore natural to apply SAM to the task of brain tumor segmentation. In this study, we evaluated the performance of SAM on brain tumor segmentation and found that, without any model fine-tuning, a gap remains between SAM and the current state-of-the-art (SOTA) model.
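The abstract reports a gap to the SOTA model without naming a metric; brain tumor segmentation studies in the BraTS tradition are typically scored with the Dice similarity coefficient, so that metric is assumed here as an illustration rather than taken from the paper. A minimal NumPy sketch:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Toy example: the predicted mask covers half of a 16-pixel "tumor" region.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True        # ground-truth tumor, 16 pixels
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:4] = True      # prediction, 8 pixels, all inside the tumor
print(round(dice_score(pred, gt), 3))  # 2*8 / (8+16) -> 0.667
```

A Dice score of 1.0 means perfect overlap; the "gap to SOTA" the abstract mentions would show up as a lower mean Dice for zero-shot SAM than for models trained on brain MRI.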
Related papers
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - Towards Generalizable Tumor Synthesis [48.45704270448412]
Tumor synthesis enables the creation of artificial tumors in medical images, facilitating the training of AI models for tumor detection and segmentation.
This paper takes a step toward generalizable tumor synthesis by leveraging a critical observation.
We have ascertained that generative AI models, e.g., diffusion models, can create realistic tumors that generalize to a range of organs even when trained on a limited number of tumor examples from only one organ.
arXiv Detail & Related papers (2024-02-29T18:57:39Z) - Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images [0.04924932828166548]
This study investigates the Segment Anything Model (SAM), recognized for requiring minimal human prompting.
We specifically examine MedSAM, a version of SAM fine-tuned with large-scale public medical images.
Our study demonstrates that fine-tuning SAM significantly enhances its segmentation accuracy, building upon the already effective zero-shot results.
arXiv Detail & Related papers (2024-02-27T12:26:45Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method builds on a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - Brain tumor multi classification and segmentation in MRI images using deep learning [3.1248717814228923]
The classification model is based on the EfficientNetB1 architecture and is trained to classify images into four classes: meningioma, glioma, pituitary adenoma, and no tumor.
The segmentation model is based on the U-Net architecture and is trained to accurately segment the tumor from the MRI images.
arXiv Detail & Related papers (2023-04-20T01:32:55Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation [0.9176056742068814]
We propose a new end-to-end brain tumor segmentation architecture based on U-Net.
Our proposed model performed significantly better than the state-of-the-art U-Net-based model for tumor core and whole tumor segmentation.
arXiv Detail & Related papers (2021-08-15T16:04:09Z) - MAG-Net: Multi-task attention guided network for brain tumor segmentation and classification [0.9176056742068814]
This paper proposes multi-task attention guided encoder-decoder network (MAG-Net) to classify and segment the brain tumor regions using MRI images.
The model achieved promising results as compared to existing state-of-the-art models.
arXiv Detail & Related papers (2021-07-26T16:51:00Z) - Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain MRI [47.26574993639482]
We show improved anomaly segmentation performance and the general capability to obtain much crisper reconstructions of input data at native resolution.
The modeling of the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales.
arXiv Detail & Related papers (2020-06-23T09:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.