TomoSAM: a 3D Slicer extension using SAM for tomography segmentation
- URL: http://arxiv.org/abs/2306.08609v1
- Date: Wed, 14 Jun 2023 16:13:27 GMT
- Title: TomoSAM: a 3D Slicer extension using SAM for tomography segmentation
- Authors: Federico Semeraro, Alexandre Quintart, Sergio Fraile Izquierdo, Joseph C. Ferguson
- Abstract summary: TomoSAM has been developed to integrate the cutting-edge Segment Anything Model (SAM) into 3D Slicer.
SAM is a promptable deep learning model that is able to identify objects and create image masks in a zero-shot manner.
The synergy between these tools aids in the segmentation of complex 3D datasets from tomography or other imaging techniques.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: TomoSAM has been developed to integrate the cutting-edge Segment Anything
Model (SAM) into 3D Slicer, a highly capable software platform used for 3D
image processing and visualization. SAM is a promptable deep learning model
that is able to identify objects and create image masks in a zero-shot manner,
based only on a few user clicks. The synergy between these tools aids in the
segmentation of complex 3D datasets from tomography or other imaging
techniques, which would otherwise require a laborious manual segmentation
process. The source code associated with this article can be found at
https://github.com/fsemerar/SlicerTomoSAM
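The click-prompted, slice-by-slice workflow the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not TomoSAM's actual implementation: `segment_slice` is a hypothetical stand-in for SAM's promptable per-slice inference, replaced here by a simple intensity-similarity rule so the example is self-contained.

```python
import numpy as np

def segment_slice(image_2d, click_points):
    """Hypothetical stand-in for a promptable 2D segmenter such as SAM:
    returns a boolean mask of pixels whose intensity is close to the
    intensity at each user click."""
    mask = np.zeros(image_2d.shape, dtype=bool)
    for (row, col) in click_points:
        seed = image_2d[row, col]
        mask |= np.abs(image_2d - seed) < 0.1
    return mask

def segment_volume(volume, clicks_per_slice):
    """Build a 3D mask by segmenting each 2D slice that received user clicks."""
    mask_3d = np.zeros(volume.shape, dtype=bool)
    for z, clicks in clicks_per_slice.items():
        mask_3d[z] = segment_slice(volume[z], clicks)
    return mask_3d

# Tiny synthetic tomography volume: a bright cube in a dark background.
volume = np.zeros((4, 8, 8))
volume[1:3, 2:6, 2:6] = 1.0
# One click on each of two slices selects the bright region there.
mask = segment_volume(volume, {1: [(3, 3)], 2: [(4, 4)]})
```

In the real extension the per-slice masks come from SAM's image encoder and prompt decoder, and unclicked slices are typically filled in by interpolation rather than left empty as in this sketch.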
Related papers
- SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners [87.76470518069338]
We introduce SAM2Point, a preliminary exploration adapting Segment Anything Model 2 (SAM 2) for promptable 3D segmentation.
Our framework supports various prompt types, including 3D points, boxes, and masks, and can generalize across diverse scenarios, such as 3D objects, indoor scenes, sparse outdoor environments, and raw LiDAR.
To the best of our knowledge, we present the most faithful implementation of SAM in 3D, which may serve as a starting point for future research in promptable 3D segmentation.
arXiv Detail & Related papers (2024-08-29T17:59:45Z)
- Point-SAM: Promptable 3D Segmentation Model for Point Clouds [25.98791840584803]
We propose a 3D promptable segmentation model (Point-SAM) focusing on point clouds.
Our approach utilizes a transformer-based method, extending SAM to the 3D domain.
Our model outperforms state-of-the-art models on several indoor and outdoor benchmarks.
arXiv Detail & Related papers (2024-06-25T17:28:03Z)
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method enables the extraction of richer marine information, from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- SAI3D: Segment Any Instance in 3D Scenes [68.57002591841034]
We introduce SAI3D, a novel zero-shot 3D instance segmentation approach.
Our method partitions a 3D scene into geometric primitives, which are then progressively merged into 3D instance segmentations.
Empirical evaluations on ScanNet, Matterport3D and the more challenging ScanNet++ datasets demonstrate the superiority of our approach.
arXiv Detail & Related papers (2023-12-17T09:05:47Z)
- SAM3D: Segment Anything in 3D Scenes [33.57040455422537]
We propose a novel framework that is able to predict masks in 3D point clouds by leveraging the Segment-Anything Model (SAM) in RGB images without further training or finetuning.
For a point cloud of a 3D scene with posed RGB images, we first predict segmentation masks of RGB images with SAM, and then project the 2D masks into the 3D points.
Our approach is evaluated on the ScanNet dataset, and qualitative results demonstrate that SAM3D achieves reasonable and fine-grained 3D segmentation without any training or finetuning.
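The 2D-to-3D projection step described above can be sketched in a few lines. This is an illustrative pinhole-camera version with made-up intrinsics and a toy mask, not the authors' code: each 3D point is projected into the posed RGB image and inherits the label of the mask pixel it lands on.

```python
import numpy as np

def project_mask_to_points(points, mask_2d, K, R, t):
    """Assign each 3D point the label of the 2D mask pixel it projects to.
    points: (N, 3) world coordinates; K: 3x3 camera intrinsics;
    R, t: world-to-camera rotation and translation."""
    cam = points @ R.T + t           # world -> camera coordinates
    pix = cam @ K.T                  # camera -> homogeneous pixel coordinates
    uv = pix[:, :2] / pix[:, 2:3]    # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask_2d.shape
    labels = np.zeros(len(points), dtype=bool)
    # Only points in front of the camera and inside the image get a label.
    visible = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[visible] = mask_2d[v[visible], u[visible]]
    return labels

# Toy setup: identity pose, simple intrinsics, mask covering the left half.
K = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
mask_2d = np.zeros((4, 4), dtype=bool)
mask_2d[:, :2] = True
points = np.array([[-1.0, 0.0, 1.0], [1.0, 0.0, 1.0]])  # left / right of center
labels = project_mask_to_points(points, mask_2d, K, R, t)
```

The full SAM3D pipeline additionally merges labels from many posed views and handles occlusion; this sketch covers only the single-view projection.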
arXiv Detail & Related papers (2023-06-06T17:59:51Z)
- SAD: Segment Any RGBD [54.24917975958583]
The Segment Anything Model (SAM) has demonstrated its effectiveness in segmenting any part of 2D RGB images.
We propose the Segment Any RGBD (SAD) model, which is specifically designed to extract geometry information directly from images.
arXiv Detail & Related papers (2023-05-23T16:26:56Z)
- Segment Anything in 3D with Radiance Fields [83.14130158502493]
This paper generalizes the Segment Anything Model (SAM) to segment 3D objects.
We refer to the proposed solution as SA3D, short for Segment Anything in 3D.
We show in experiments that SA3D adapts to various scenes and achieves 3D segmentation within seconds.
arXiv Detail & Related papers (2023-04-24T17:57:15Z)
- SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM [6.172995387355581]
We introduce Segment Any Medical Model (SAMM), an extension of SAM on 3D Slicer.
SAMM achieves a latency of 0.6 seconds per complete cycle and can infer image masks in near real-time.
arXiv Detail & Related papers (2023-04-12T05:39:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.