MedLSAM: Localize and Segment Anything Model for 3D CT Images
- URL: http://arxiv.org/abs/2306.14752v3
- Date: Thu, 16 Nov 2023 07:12:46 GMT
- Title: MedLSAM: Localize and Segment Anything Model for 3D CT Images
- Authors: Wenhui Lei, Xu Wei, Xiaofan Zhang, Kang Li, Shaoting Zhang
- Abstract summary: We develop a Localize Anything Model for 3D Medical Images (MedLAM).
MedLAM is capable of directly localizing any anatomical structure using just a few template scans.
It has the potential to be seamlessly integrated with future 3D SAM models.
- Score: 14.290321536041816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Segment Anything Model (SAM) has recently emerged as a groundbreaking
model in the field of image segmentation. Nevertheless, both the original SAM
and its medical adaptations require slice-by-slice annotation, so the
annotation workload grows directly with dataset size. We propose MedLSAM to
address this issue, keeping the annotation workload constant irrespective of
dataset size and thereby simplifying the annotation process.
Our model introduces a 3D localization foundation model capable of localizing
any target anatomical part within the body. To achieve this, we develop a
Localize Anything Model for 3D Medical Images (MedLAM), utilizing two
self-supervision tasks: unified anatomical mapping (UAM) and multi-scale
similarity (MSS) across a comprehensive dataset of 14,012 CT scans. We then
establish a methodology for accurate segmentation by integrating MedLAM with
SAM. By annotating several extreme points across three directions on a few
templates, our model can autonomously identify the target anatomical region on
all data scheduled for annotation. This allows our framework to generate a 2D
bounding box for every slice of the image, which SAM then uses to carry out
segmentation. We carried out comprehensive experiments on two 3D datasets
encompassing 38 distinct organs. Our findings are twofold: 1) MedLAM is capable
of directly localizing any anatomical structure using just a few template
scans, while its performance surpasses that of fully supervised models; 2)
MedLSAM not only closely matches the performance of SAM and its specialized
medical adaptations under manual prompts, but also achieves this with minimal
reliance on extreme-point annotations across the entire dataset. Furthermore, MedLAM has
the potential to be seamlessly integrated with future 3D SAM models, paving the
way for enhanced performance.
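The annotate-once, segment-everything workflow the abstract describes can be made concrete with a short sketch. The functions below (`extreme_points_to_bbox3d`, `medlam_localize`, `sam_segment_slice`) are hypothetical stand-ins, not the authors' released code: in the real pipeline MedLAM would regress where the template's extreme points land in each query scan via its self-supervised anatomical coordinates (UAM + MSS), and SAM would replace the box-fill placeholder.

```python
import numpy as np

def extreme_points_to_bbox3d(points):
    """Six extreme points (two per axis, annotated once on a template)
    collapse into a 3D bounding box (zmin, ymin, xmin, zmax, ymax, xmax)."""
    pts = np.asarray(points)                     # shape (6, 3), (z, y, x)
    return np.concatenate([pts.min(axis=0), pts.max(axis=0)])

def medlam_localize(query_volume, template_bbox3d):
    """Hypothetical stub: MedLAM would predict where the template box lands
    in the query scan; here the template box is simply reused."""
    return template_bbox3d

def sam_segment_slice(slice_2d, bbox_2d):
    """Hypothetical stub: SAM would segment inside the 2D box prompt;
    here the box is filled as a placeholder mask."""
    y0, x0, y1, x1 = bbox_2d
    mask = np.zeros_like(slice_2d, dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

def medlsam_pipeline(query_volume, template_extreme_points):
    """Annotate a few templates once; every new scan is then localized and
    segmented without further manual input."""
    box = medlam_localize(query_volume,
                          extreme_points_to_bbox3d(template_extreme_points))
    zmin, ymin, xmin, zmax, ymax, xmax = box.astype(int)
    masks = np.zeros(query_volume.shape, dtype=bool)
    for z in range(zmin, zmax):                  # one 2D box prompt per slice
        masks[z] = sam_segment_slice(query_volume[z],
                                     (ymin, xmin, ymax, xmax))
    return masks

# Toy usage: six extreme points on a template drive segmentation of a new scan.
query = np.random.rand(16, 64, 64)
points = [(4, 30, 32), (12, 30, 32), (8, 20, 32),
          (8, 44, 32), (8, 30, 22), (8, 30, 46)]
print(medlsam_pipeline(query, points).sum())     # voxels inside the region
```

Note how the constant-workload claim falls out of the structure: only `points` requires human effort, and it is annotated on a fixed number of templates regardless of how many query scans are processed.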
Related papers
- Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation [8.78725593323412]
Few-shot Adaptation of Training-frEe SAM (FATE-SAM) is a novel method designed to adapt the advanced Segment Anything Model 2 (SAM2) for 3D medical image segmentation.
FATE-SAM reassembles pre-trained modules of SAM2 to enable few-shot adaptation, leveraging a small number of support examples.
We evaluate FATE-SAM on multiple medical imaging datasets and compare it with supervised learning methods, zero-shot SAM approaches, and fine-tuned medical SAM methods.
arXiv Detail & Related papers (2025-01-15T20:44:21Z)
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8%, compared to a recent medical SAM adapter in the literature.
arXiv Detail & Related papers (2024-10-05T14:36:43Z)
- Novel adaptation of video segmentation to 3D MRI: efficient zero-shot knee segmentation with SAM2 [1.6237741047782823]
We introduce a method for zero-shot, single-prompt segmentation of 3D knee MRI by adapting Segment Anything Model 2.
By treating slices from 3D medical volumes as individual video frames, we leverage SAM2's advanced capabilities to generate motion- and spatially-aware predictions.
We demonstrate that SAM2 can efficiently perform segmentation tasks in a zero-shot manner with no additional training or fine-tuning (a code sketch of this slices-as-frames workflow follows this entry).
arXiv Detail & Related papers (2024-08-08T21:39:15Z)
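The entry above and the interactive-segmentation paper that follows rest on the same mechanism: write the axial slices of a volume out as video frames, prompt a single slice, and let SAM 2's memory-based tracker propagate the mask through the rest. Below is a minimal sketch, assuming the video-predictor API published in the facebookresearch/sam2 repository (`build_sam2_video_predictor`, `init_state`, `add_new_points_or_box`, `propagate_in_video`); the config, checkpoint, and volume paths are placeholders.

```python
import os
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2_video_predictor  # facebookresearch/sam2

def volume_to_frames(volume, frames_dir):
    """Write each axial slice as an 8-bit RGB JPEG 'frame' for the predictor."""
    os.makedirs(frames_dir, exist_ok=True)
    lo, hi = float(volume.min()), float(volume.max())
    for z, sl in enumerate(volume):
        img = ((sl - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
        Image.fromarray(img).convert("RGB").save(f"{frames_dir}/{z:05d}.jpg")

# Placeholder paths: swap in a real SAM 2 config/checkpoint and (D, H, W) volume.
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")
volume = np.load("mri_volume.npy")
volume_to_frames(volume, "frames")

state = predictor.init_state(video_path="frames")

# One box prompt on a single slice...
predictor.add_new_points_or_box(
    inference_state=state, frame_idx=volume.shape[0] // 2, obj_id=1,
    box=np.array([60, 60, 200, 200], dtype=np.float32))

# ...propagated through the remaining slices as if they were video frames
# (run again with reverse=True to cover the slices before the prompted one).
masks = {}
with torch.inference_mode():
    for frame_idx, obj_ids, logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (logits[0, 0] > 0).cpu().numpy()
```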
- Interactive 3D Medical Image Segmentation with SAM 2 [17.523874868612577]
We explore the zero-shot capabilities of SAM 2, the next-generation Meta SAM model trained on videos, for 3D medical image segmentation.
By treating sequential 2D slices of 3D images as video frames, SAM 2 can fully automatically propagate annotations from a single frame to the entire 3D volume.
arXiv Detail & Related papers (2024-08-05T16:58:56Z)
- Medical SAM 2: Segment medical images as video via Segment Anything Model 2 [17.469217682817586]
We introduce Medical SAM 2 (MedSAM-2), a generalized auto-tracking model for universal 2D and 3D medical image segmentation.
We evaluate MedSAM-2 on five 2D tasks and nine 3D tasks, including white blood cells, optic cups, retinal vessels, mandibles, coronary arteries, kidney tumors, liver tumors, breast cancer, nasopharynx cancer, vestibular schwannomas, mediastinal lymph nodules, cerebral artery, inferior alveolar nerve, and abdominal organs.
arXiv Detail & Related papers (2024-08-01T18:49:45Z)
- M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models [49.5030774873328]
Previous research has primarily focused on 2D medical images, leaving 3D images under-explored, despite their richer spatial information.
We present a large-scale 3D multi-modal medical dataset, M3D-Data, comprising 120K image-text pairs and 662K instruction-response pairs.
We also introduce a new 3D multi-modal medical benchmark, M3D-Bench, which facilitates automatic evaluation across eight tasks.
arXiv Detail & Related papers (2024-03-31T06:55:12Z)
- SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images [35.83393121891959]
We introduce SAM-Med3D for general-purpose segmentation on volumetric medical images.
SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities.
Our approach demonstrates that substantial medical resources can be utilized to develop a general-purpose medical AI.
arXiv Detail & Related papers (2023-10-23T17:57:36Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from the input data (a sketch of such an adapter follows this entry).
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
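The adapter pattern is easy to picture in code. The sketch below is an illustrative guess at the mechanism, not the authors' implementation: a bottleneck adapter containing a 3D convolution is wrapped around each frozen 2D transformer block, so only the adapter's weights (the trainable "weight increments") receive gradients. The token-grid layout, bottleneck width, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Bottleneck adapter whose 3D convolution mixes features across the
    depth (slice) dimension that the 2D backbone never sees."""
    def __init__(self, dim, depth, bottleneck=64):
        super().__init__()
        self.depth = depth                       # slices packed into the batch
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, 3, padding=1)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):                        # x: (B*depth, N, C) tokens
        h = self.down(x)
        bd, n, c = h.shape
        s = int(n ** 0.5)                        # assume a square token grid
        h = h.view(-1, self.depth, s, s, c).permute(0, 4, 1, 2, 3)
        h = self.conv3d(h)                       # third-dimension mixing
        h = h.permute(0, 2, 3, 4, 1).reshape(bd, n, c)
        return x + self.up(torch.relu(h))        # residual keeps frozen path

class AdaptedBlock(nn.Module):
    """Freeze a pre-trained 2D block; train only the injected 3D adapter."""
    def __init__(self, block, dim, depth):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False              # parameter-efficient tuning
        self.adapter = Adapter3D(dim, depth)

    def forward(self, x):
        return self.adapter(self.block(x))

# Toy usage: 8 slices of 14x14 tokens with 256 channels through one block.
blk = AdaptedBlock(nn.Identity(), dim=256, depth=8)
out = blk(torch.randn(8, 196, 256))              # -> (8, 196, 256)
```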
- SAM3D: Segment Anything Model in Volumetric Medical Images [11.764867415789901]
We introduce SAM3D, an innovative adaptation tailored for 3D volumetric medical image analysis.
Unlike current SAM-based methods that segment volumetric data by converting the volume into separate 2D slices for individual analysis, our SAM3D model processes the entire 3D volume image in a unified approach.
arXiv Detail & Related papers (2023-09-07T06:05:28Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM [6.172995387355581]
We introduce Segment Any Medical Model (SAMM), an extension of SAM on 3D Slicer.
SAMM achieves a latency of 0.6 seconds per complete cycle and can infer image masks in near real-time.
arXiv Detail & Related papers (2023-04-12T05:39:38Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.