AdapterShadow: Adapting Segment Anything Model for Shadow Detection
- URL: http://arxiv.org/abs/2311.08891v1
- Date: Wed, 15 Nov 2023 11:51:10 GMT
- Title: AdapterShadow: Adapting Segment Anything Model for Shadow Detection
- Authors: Leiping Jie and Hui Zhang
- Abstract summary: Segment anything model (SAM) has shown its spectacular performance in segmenting universal objects.
However, it fails to segment specific targets, e.g., shadow images or lesions in medical images.
We propose AdapterShadow, which adapts the SAM model for shadow detection.
- Score: 6.201928340999525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segment anything model (SAM) has shown its spectacular performance in
segmenting universal objects, especially when elaborate prompts are provided.
However, the drawbacks of SAM are twofold. On the one hand, it fails to segment
specific targets, e.g., shadow images or lesions in medical images. On the
other hand, manually specifying prompts is extremely time-consuming. To
overcome these problems, we propose AdapterShadow, which adapts the SAM model
for shadow detection. To adapt SAM to shadow images, trainable adapters are
inserted into the frozen image encoder of SAM, since training the full SAM
model is both time- and memory-consuming. Moreover, we introduce a novel grid
sampling method to generate dense point prompts, which helps to segment
shadows automatically without any manual intervention. Extensive experiments
are conducted on four widely used benchmark datasets to demonstrate the
superior performance of our proposed method. Code is publicly available at
https://github.com/LeipingJie/AdapterShadow.
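As a rough illustration of the two ideas in the abstract, the sketch below shows (1) a bottleneck adapter appended to a frozen encoder block, so only the adapter weights are trained, and (2) a grid sampling routine that turns a coarse shadow-probability map into dense point prompts, removing the need for manual clicks. This is a minimal sketch, not the authors' implementation: the adapter shape, where it is inserted, and the coarse probability map used to place the points are all assumptions.

```python
# Minimal sketch (not the authors' code) of: (1) trainable bottleneck adapters
# added to a frozen transformer encoder, and (2) grid-sampled dense point
# prompts derived from a coarse shadow-probability map. Module names, sizes,
# and the probability-map input are illustrative assumptions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps one frozen encoder block and appends a trainable adapter."""

    def __init__(self, frozen_block: nn.Module, dim: int):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():      # pretrained weights stay frozen
            p.requires_grad = False
        self.adapter = Adapter(dim)            # only the adapter is trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


def grid_point_prompts(prob_map: torch.Tensor, grid: int = 8):
    """Pick one point prompt per grid cell of a coarse shadow-probability map.

    prob_map: (H, W) tensor with values in [0, 1].
    Returns (N, 2) xy coordinates and (N,) labels (1 = shadow, 0 = background).
    """
    H, W = prob_map.shape
    cell_h, cell_w = H // grid, W // grid
    xs, ys, labels = [], [], []
    for gy in range(0, H, cell_h):
        for gx in range(0, W, cell_w):
            cell = prob_map[gy:gy + cell_h, gx:gx + cell_w]
            idx = int(torch.argmax(cell))      # most confident pixel in the cell
            dy, dx = divmod(idx, cell.shape[1])
            ys.append(gy + dy)
            xs.append(gx + dx)
            labels.append(int(cell[dy, dx] > 0.5))
    points = torch.tensor(list(zip(xs, ys)), dtype=torch.float32)
    return points, torch.tensor(labels, dtype=torch.long)
```

In a setup like this, an AdaptedBlock would wrap each block of SAM's image encoder, and the generated points and labels would be fed to SAM's prompt encoder in place of user clicks.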
Related papers
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation [51.90445260276897]
We prove that the Segment Anything Model 2 (SAM2) can be a strong encoder for U-shaped segmentation models.
We propose a simple but effective framework, termed SAM2-UNet, for versatile image segmentation.
arXiv Detail & Related papers (2024-08-16T17:55:38Z) - MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method extracts richer marine information, from global contextual cues down to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z) - CAT-SAM: Conditional Tuning for Few-Shot Adaptation of Segment Anything Model [90.26396410706857]
This paper presents CAT-SAM, a ConditionAl Tuning network that adapts SAM toward various unconventional target tasks.
CAT-SAM freezes the entire SAM and adapts its mask decoder and image encoder simultaneously with a small number of learnable parameters.
CAT-SAM variants achieve superior target segmentation performance consistently, even under the very challenging one-shot adaptation setup.
arXiv Detail & Related papers (2024-02-06T02:00:18Z) - PA-SAM: Prompt Adapter SAM for High-Quality Image Segmentation [19.65118388712439]
We introduce a novel prompt-driven adapter into SAM, namely the Prompt Adapter Segment Anything Model (PA-SAM).
By exclusively training the prompt adapter, PA-SAM extracts detailed information from images and optimizes the mask decoder features at both sparse and dense prompt levels.
Experimental results demonstrate that our PA-SAM outperforms other SAM-based methods in high-quality, zero-shot, and open-set segmentation.
arXiv Detail & Related papers (2024-01-23T19:20:22Z) - Beyond Adapting SAM: Towards End-to-End Ultrasound Image Segmentation via Auto Prompting [10.308637269138146]
We propose SAMUS as a universal model tailored for ultrasound image segmentation.
We further enable it to work in an end-to-end manner, denoted as AutoSAMUS.
AutoSAMUS is realized by introducing an auto prompt generator (APG) to replace the manual prompt encoder of SAMUS.
arXiv Detail & Related papers (2023-09-13T09:15:20Z) - AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace SAM's prompt encoder with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical image and video benchmarks.
To inspect the knowledge it encodes and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
arXiv Detail & Related papers (2023-06-10T07:27:00Z) - SAM-helps-Shadow: When Segment Anything Model meet shadow removal [8.643096072885909]
In this study, we innovatively adapted SAM (Segment Anything Model) for shadow removal by introducing SAM-helps-Shadow.
Our approach utilized the model's detection results as a potent prior for facilitating shadow detection, followed by shadow removal using a second-order deep unfolding network.
arXiv Detail & Related papers (2023-06-01T06:37:19Z) - Detect Any Shadow: Segment Anything for Video Shadow Detection [105.19693622157462]
We propose ShadowSAM, a framework for fine-tuning segment anything model (SAM) to detect shadows.
By combining it with a long short-term attention mechanism, we extend its capability for efficient video shadow detection.
Our method exhibits accelerated inference speed compared to previous video shadow detection approaches.
arXiv Detail & Related papers (2023-05-26T07:39:10Z) - When SAM Meets Shadow Detection [2.9324443830722973]
We try the Segment Anything Model (SAM) on an unexplored yet popular task: shadow detection.
Experiments show that the performance of SAM for shadow detection is not satisfactory, especially when compared with elaborate models.
arXiv Detail & Related papers (2023-05-19T08:26:08Z) - Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free Personalization approach for the Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z) - SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More [13.047310918166762]
We propose SAM-Adapter, which incorporates domain-specific information or visual prompts into the segmentation network by using simple yet effective adapters.
We can even outperform task-specific network models and achieve state-of-the-art performance in the task we tested: camouflaged object detection.
arXiv Detail & Related papers (2023-04-18T17:38:54Z)