AquaSAM: Underwater Image Foreground Segmentation
- URL: http://arxiv.org/abs/2308.04218v1
- Date: Tue, 8 Aug 2023 12:30:36 GMT
- Title: AquaSAM: Underwater Image Foreground Segmentation
- Authors: Muduo Xu, Jianhao Su, Yutao Liu
- Abstract summary: This work presents AquaSAM, the first attempt to extend the success of SAM to underwater images.
We develop a straightforward fine-tuning method to adapt SAM to general foreground underwater image segmentation.
We demonstrate that AquaSAM outperforms the default SAM model, especially on difficult targets such as coral reefs.
- Score: 1.7482936568887284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) has revolutionized natural image
segmentation; nevertheless, its performance on underwater images remains
limited. This work presents AquaSAM, the first attempt to extend the success
of SAM to underwater images, with the goal of creating a versatile method for
segmenting various underwater targets. To achieve this, we begin by
automatically classifying and extracting the labels in the SUIM dataset.
Subsequently, we develop a straightforward fine-tuning method to adapt SAM to
general foreground underwater image segmentation. Through extensive experiments
covering eight segmentation tasks, such as human divers, we demonstrate that
AquaSAM outperforms the default SAM model, especially on difficult targets such
as coral reefs. AquaSAM achieves an average improvement of 7.13% in Dice
Similarity Coefficient (DSC) and 8.27% in mean Intersection over Union (mIoU)
on underwater segmentation tasks.
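For reference, DSC and mIoU are standard overlap metrics for binary foreground masks. The sketch below shows one common way to compute them; it is illustrative only, and the function names and the mIoU averaging convention are assumptions rather than the AquaSAM evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Similarity Coefficient for binary masks: 2|P∩G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Intersection over Union for binary masks: |P∩G| / |P∪G|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / (union + eps))

# mIoU here is simply IoU averaged over a collection of predictions
# (e.g. one per image or per task); the averaging convention is an
# assumption, not taken from the AquaSAM paper.
def mean_iou(preds, gts) -> float:
    return float(np.mean([iou(p, g) for p, g in zip(preds, gts)]))
```

Usage: call dice(pred_mask, gt_mask) or iou(pred_mask, gt_mask) on two boolean H×W arrays, and mean_iou on matched lists of predicted and ground-truth masks.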
Related papers
- Evaluation of Segment Anything Model 2: The Role of SAM2 in the Underwater Environment [2.0554501265326794]
The Segment Anything Model (SAM) and its extensions have been applied to various underwater visualization tasks in the marine sciences.
Recently, Meta has developed the Segment Anything Model 2 (SAM2), which significantly improves running speed and segmentation accuracy.
This report aims to explore the potential of SAM2 in marine science by evaluating it on the underwater instance segmentation benchmarks UIIS and USIS10K.
arXiv Detail & Related papers (2024-08-06T03:20:10Z)
- RobustSAM: Segment Anything Robustly on Degraded Images [19.767828436963317]
The Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation.
We propose the Robust Segment Anything Model (RobustSAM), which enhances SAM's performance on low-quality images.
Our method has been shown to effectively improve the performance of SAM-based downstream tasks such as single image dehazing and deblurring.
arXiv Detail & Related papers (2024-06-13T23:33:59Z)
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to the complex underwater circumstances.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance Segmentation architecture based on the Segment Anything Model (USIS-SAM), tailored to the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method extracts richer marine information, from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- Moving Object Segmentation: All You Need Is SAM (and Flow) [82.78026782967959]
We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects.
In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt.
These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin on both single- and multi-object benchmarks.
arXiv Detail & Related papers (2024-04-18T17:59:53Z)
- Fantastic Animals and Where to Find Them: Segment Any Marine Animal with Dual SAM [62.85895749882285]
Marine Animal Segmentation (MAS) involves segmenting animals within marine environments.
We propose a novel feature learning framework, named Dual-SAM, for high-performance MAS.
Our proposed method achieves state-of-the-art performance on five widely-used MAS datasets.
arXiv Detail & Related papers (2024-04-07T15:34:40Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- When SAM Meets Sonar Images [6.902760999492406]
The Segment Anything Model (SAM) has revolutionized image segmentation.
SAM's performance may decline when applied to tasks involving domains that differ from natural images.
By employing fine-tuning techniques, SAM exhibits promising capabilities in specific domains, such as medicine and planetary science.
arXiv Detail & Related papers (2023-06-25T03:15:14Z)
- Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose PerSAM, a training-free personalization approach for the Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept using a location prior.
It then segments the target in other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z)