Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection
- URL: http://arxiv.org/abs/2304.04709v2
- Date: Tue, 11 Apr 2023 03:53:13 GMT
- Title: Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection
- Authors: Lv Tang, Haoke Xiao, Bo Li
- Abstract summary: SAM is a segmentation model recently released by Meta AI Research.
We ask whether SAM can address the camouflaged object detection (COD) task and evaluate its performance on the COD benchmark.
We also compare SAM's performance with 22 state-of-the-art COD methods.
- Score: 8.476593072868056
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: SAM is a segmentation model recently released by Meta AI Research and has
been gaining attention quickly due to its impressive performance in generic
object segmentation. However, its ability to generalize to specific scenes such
as camouflaged scenes is still unknown. Camouflaged object detection (COD)
involves identifying objects that are seamlessly integrated into their
surroundings and has numerous practical applications in fields such as
medicine, art, and agriculture. In this study, we ask whether SAM can
address the COD task and evaluate its performance on the COD benchmark by
employing maximum segmentation evaluation and camouflage location evaluation.
We also compare SAM's performance with 22 state-of-the-art COD methods. Our
results indicate that while SAM shows promise in generic object segmentation,
its performance on the COD task is limited. This presents an opportunity for
further research to explore how to build a stronger SAM that may address the
COD task. The results of this paper are provided at
https://github.com/luckybird1994/SAMCOD.
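For concreteness, the sketch below illustrates the maximum-segmentation-evaluation idea under stated assumptions: SAM runs prompt-free via the official segment-anything package's SamAutomaticMaskGenerator, and plain IoU stands in for the COD metrics the paper actually reports; the checkpoint path in the usage note is illustrative, not from the paper.

```python
# Minimal sketch of maximum segmentation evaluation: run SAM in prompt-free
# auto mode and score only the candidate mask that best matches the ground
# truth. IoU is an illustrative stand-in for the paper's COD metrics.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 0.0

def max_segmentation_eval(image: np.ndarray, gt_mask: np.ndarray,
                          generator: SamAutomaticMaskGenerator) -> float:
    """Best IoU over all masks SAM proposes for an RGB uint8 image."""
    candidates = generator.generate(image)  # each: {"segmentation": bool HxW, ...}
    return max((iou(c["segmentation"], gt_mask > 0) for c in candidates),
               default=0.0)

# Usage (checkpoint path is an assumption):
# sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
# score = max_segmentation_eval(img, gt, SamAutomaticMaskGenerator(sam))
```

Taking only the best-matching mask gives SAM the benefit of the doubt: it measures whether a correct camouflaged-object mask exists among SAM's proposals at all, separately from whether SAM can locate it.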
Related papers
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation [51.90445260276897]
We show that the Segment Anything Model 2 (SAM2) can be a strong encoder for U-shaped segmentation models.
We propose a simple but effective framework, termed SAM2-UNet, for versatile image segmentation.
arXiv Detail & Related papers (2024-08-16T17:55:38Z) - Evaluating SAM2's Role in Camouflaged Object Detection: From SAM to SAM2 [10.751277821864916]
The report reveals a decline in SAM2's ability to perceive different objects in images without prompts in its auto mode.
Specifically, we employ the challenging task of camouflaged object detection to assess this performance decrease.
arXiv Detail & Related papers (2024-07-31T13:32:10Z) - MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method extracts richer marine information, from global contextual cues down to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z) - Moving Object Segmentation: All You Need Is SAM (and Flow) [82.78026782967959]
We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects.
In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt (see the sketch after this list).
These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin on both single- and multi-object benchmarks.
arXiv Detail & Related papers (2024-04-18T17:59:53Z) - VRP-SAM: SAM with Visual Reference Prompt [73.05676082695459]
We propose a novel Visual Reference Prompt (VRP) encoder that empowers the Segment Anything Model (SAM) to utilize annotated reference images as prompts for segmentation.
In essence, VRP-SAM can utilize annotated reference images to comprehend specific objects and segment those objects in the target image.
arXiv Detail & Related papers (2024-02-27T17:58:09Z) - TinySAM: Pushing the Envelope for Efficient Segment Anything Model [76.21007576954035]
We propose a framework to obtain a tiny segment anything model (TinySAM) while maintaining the strong zero-shot performance.
We first propose a full-stage knowledge distillation method with hard prompt sampling and a hard mask weighting strategy to distill a lightweight student model.
We also adapt post-training quantization to the promptable segmentation task, further reducing the computational cost.
arXiv Detail & Related papers (2023-12-21T12:26:11Z) - When SAM Meets Shadow Detection [2.9324443830722973]
We apply the Segment Anything Model (SAM) to a popular but previously unexplored task: shadow detection.
Experiments show that SAM's shadow detection performance is not satisfactory, especially compared with elaborate task-specific models.
arXiv Detail & Related papers (2023-05-19T08:26:08Z) - Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free Personalization approach for the Segment Anything Model (SAM), termed PerSAM.
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z) - Segment anything, from space? [8.126645790463266]
"Segment Anything Model" (SAM) can segment objects in input imagery based on cheap input prompts.
SAM usually achieved recognition accuracy similar to, or sometimes exceeding, that of vision models trained on the target tasks.
We examine whether SAM's performance extends to overhead imagery problems, to help guide the community's response to its development.
arXiv Detail & Related papers (2023-04-25T17:14:36Z) - Can SAM Count Anything? An Empirical Study on SAM Counting [35.42720382193184]
We explore the use of the Segment Anything Model (SAM) for the challenging task of few-shot object counting.
We find that SAM's performance is unsatisfactory without further fine-tuning, particularly for small and crowded objects.
arXiv Detail & Related papers (2023-04-21T08:59:48Z) - SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More [13.047310918166762]
We propose SAM-Adapter, which incorporates domain-specific information or visual prompts into the segmentation network by using simple yet effective adapters.
We can even outperform task-specific network models and achieve state-of-the-art performance in the task we tested: camouflaged object detection.
arXiv Detail & Related papers (2023-04-18T17:38:54Z)
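As referenced in the "Moving Object Segmentation" entry above, here is a minimal sketch of the flow-as-prompt variant under stated assumptions: the official segment-anything predictor API is used, and the prompt is simply the peak of the optical-flow magnitude. The paper's actual prompt-selection pipeline is more involved, so treat this as illustrative.

```python
# Illustrative sketch: use optical flow to prompt SAM for a moving object.
# The peak of the flow magnitude serves as a single foreground point prompt;
# the flow itself would come from any estimator (e.g. RAFT) upstream.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def segment_moving_object(image_rgb: np.ndarray, flow: np.ndarray,
                          predictor: SamPredictor) -> np.ndarray:
    """image_rgb: HxWx3 uint8; flow: HxWx2 optical flow; returns bool HxW mask."""
    magnitude = np.linalg.norm(flow, axis=-1)
    y, x = np.unravel_index(int(np.argmax(magnitude)), magnitude.shape)
    predictor.set_image(image_rgb)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),  # SAM expects (x, y) order
        point_labels=np.array([1]),       # 1 marks a foreground point
        multimask_output=True,
    )
    return masks[int(np.argmax(scores))]  # keep SAM's most confident proposal

# Usage (checkpoint name is an assumption):
# sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
# mask = segment_moving_object(frame, flow, SamPredictor(sam))
```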
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.