SAM-Deblur: Let Segment Anything Boost Image Deblurring
- URL: http://arxiv.org/abs/2309.02270v2
- Date: Sun, 17 Dec 2023 17:39:20 GMT
- Title: SAM-Deblur: Let Segment Anything Boost Image Deblurring
- Authors: Siwei Li, Mingxuan Liu, Yating Zhang, Shu Chen, Haoxiang Li, Zifei Dou and Hong Chen
- Abstract summary: We propose a framework SAM-Deblur, integrating prior knowledge from the Segment Anything Model (SAM) into the deblurring task.
Experimental results on the RealBlurJ, ReloBlur, and REDS datasets reveal that incorporating our methods improves GoPro-trained NAFNet's PSNR by 0.05, 0.96, and 7.03, respectively.
- Score: 21.964258084389243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image deblurring is a critical task in the field of image restoration, aiming
to eliminate blurring artifacts. However, the challenge of addressing
non-uniform blurring leads to an ill-posed problem, which limits the
generalization performance of existing deblurring models. To solve the problem,
we propose a framework SAM-Deblur, integrating prior knowledge from the Segment
Anything Model (SAM) into the deblurring task for the first time. In
particular, SAM-Deblur is divided into three stages. First, we preprocess the
blurred images, obtain segment masks via SAM, and propose a mask dropout method
for training to enhance model robustness. Then, to fully leverage the
structural priors generated by SAM, we propose a Mask Average Pooling (MAP)
unit specifically designed to average SAM-generated segmented areas, serving as
a plug-and-play component which can be seamlessly integrated into existing
deblurring networks. Finally, we feed the fused features generated by the MAP
Unit into the deblurring model to obtain a sharp image. Experimental results on
the RealBlurJ, ReloBlur, and REDS datasets reveal that incorporating our
methods improves GoPro-trained NAFNet's PSNR by 0.05, 0.96, and 7.03,
respectively. The project page is available at
https://hplqaq.github.io/projects/sam-deblur (GitHub: HPLQAQ/SAM-Deblur).
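The abstract names the method's two key ingredients, mask dropout and the Mask Average Pooling (MAP) unit, but gives no implementation details. Below is a minimal PyTorch sketch of how such components could look; the function names, tensor shapes, dropout rate, and the concatenation-based fusion are all assumptions for illustration, not the paper's actual code.

```python
import torch

def mask_dropout(masks: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """Randomly drop SAM masks during training to improve robustness.
    masks: (N, H, W) boolean tensor, one channel per SAM segment.
    The rate p is an assumed value; the abstract does not state one."""
    keep = torch.rand(masks.shape[0]) >= p
    if not keep.any():                      # always retain at least one mask
        keep[torch.randint(masks.shape[0], (1,))] = True
    return masks[keep]

def mask_average_pooling(feat: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Replace the features inside each SAM segment with their mean,
    producing a structure-aware prior map of the same shape as `feat`.
    feat:  (C, H, W) image features (or the blurred image itself).
    masks: (N, H, W) boolean SAM masks."""
    out = feat.clone()
    for m in masks:                         # one segmented area at a time
        area = m.sum()
        if area == 0:
            continue
        mean = (feat * m).sum(dim=(1, 2), keepdim=True) / area   # (C, 1, 1)
        out = torch.where(m.unsqueeze(0), mean.expand_as(feat), out)
    return out

# Plug-and-play use: fuse the MAP prior with the input before an existing
# deblurring network such as NAFNet (fusion by concatenation is an assumption).
feat  = torch.randn(3, 256, 256)            # blurred image
masks = torch.rand(8, 256, 256) > 0.5       # stand-in for SAM-generated masks
prior = mask_average_pooling(feat, mask_dropout(masks))
fused = torch.cat([feat, prior], dim=0)     # (6, 256, 256) fed to the deblurrer
```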
Related papers
- PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images [16.662173255725463]
We propose a novel Pointly-supervised Segment Anything Model named PointSAM.
We conduct experiments on RSI datasets, including WHU, HRSID, and NWPU VHR-10.
The results show that our method significantly outperforms direct testing with SAM, SAM2, and other comparison methods.
arXiv Detail & Related papers (2024-09-20T11:02:18Z)
- FocSAM: Delving Deeply into Focused Objects in Segmenting Anything [58.042354516491024]
The Segment Anything Model (SAM) marks a notable milestone in segmentation models.
We propose FocSAM with a pipeline redesigned on two pivotal aspects.
First, we propose Dynamic Window Multi-head Self-Attention (Dwin-MSA) to dynamically refocus SAM's image embeddings on the target object.
Second, we propose Pixel-wise Dynamic ReLU (P-DyReLU) to enable sufficient integration of interactive information from a few initial clicks (a minimal sketch of this idea follows the entry).
arXiv Detail & Related papers (2024-05-29T02:34:13Z)
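The summary gives only the name of P-DyReLU. One plausible reading, borrowing from the known Dynamic ReLU technique (Chen et al., 2020), is an activation whose per-pixel slopes are generated from click features. The sketch below illustrates that reading only; the class name, the hyper-network, and the two-slope form are assumptions, not FocSAM's published module.

```python
import torch
import torch.nn as nn

class PixelwiseDynamicReLU(nn.Module):
    """Hypothetical pixel-wise dynamic ReLU: per-pixel activation slopes are
    generated from an interaction (click) embedding, in the spirit of DyReLU."""
    def __init__(self, channels: int, click_dim: int):
        super().__init__()
        # Hyper-network: click features -> two candidate slopes per channel.
        self.hyper = nn.Conv2d(click_dim, 2 * channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, click_feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) image features; click_feat: (B, click_dim, H, W)
        a1, a2 = self.hyper(click_feat).chunk(2, dim=1)
        return torch.maximum(a1 * feat, a2 * feat)  # max of two linear pieces

act = PixelwiseDynamicReLU(channels=64, click_dim=8)
y = act(torch.randn(1, 64, 32, 32), torch.randn(1, 8, 32, 32))
```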
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method extracts richer marine information, from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- Fantastic Animals and Where to Find Them: Segment Any Marine Animal with Dual SAM [62.85895749882285]
Marine Animal Segmentation (MAS) involves segmenting animals within marine environments.
We propose a novel feature learning framework, named Dual-SAM, for high-performance MAS.
Our proposed method achieves state-of-the-art performance on five widely-used MAS datasets.
arXiv Detail & Related papers (2024-04-07T15:34:40Z)
- WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images [8.179859593451285]
We present WSI-SAM, enhancing Segment Anything Model (SAM) with precise object segmentation capabilities for histopathology images.
To fully exploit pretrained knowledge while minimizing training overhead, we keep SAM frozen, introducing only minimal extra parameters (a sketch of this freeze-and-adapt pattern follows the entry).
Our model outperforms SAM by 4.1 and 2.5 percentage points on a ductal carcinoma in situ (DCIS) segmentation task and a breast cancer metastasis segmentation task, respectively.
arXiv Detail & Related papers (2024-03-14T10:30:43Z)
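The summary states the pattern (pretrained SAM frozen, a few extra trainable parameters) but not where those parameters live. As a hedged illustration of that general pattern only, here is a minimal PyTorch sketch; the bottleneck-adapter design and every name in it are assumptions, not WSI-SAM's actual modules.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """Hypothetical bottleneck adapter: a small residual MLP added on top of
    frozen features (not WSI-SAM's published design)."""
    def __init__(self, dim: int, hidden: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))  # residual update

def freeze_and_adapt(sam_image_encoder: nn.Module, dim: int) -> nn.Module:
    # Freeze every pretrained SAM weight so only the adapter receives gradients.
    for p in sam_image_encoder.parameters():
        p.requires_grad = False
    adapter = TinyAdapter(dim)
    n_new = sum(p.numel() for p in adapter.parameters())
    n_frozen = sum(p.numel() for p in sam_image_encoder.parameters())
    print(f"trainable: {n_new} vs frozen: {n_frozen}")
    return adapter
```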
- PA-SAM: Prompt Adapter SAM for High-Quality Image Segmentation [19.65118388712439]
We introduce a novel prompt-driven adapter into SAM, namely the Prompt Adapter Segment Anything Model (PA-SAM).
By exclusively training the prompt adapter, PA-SAM extracts detailed information from images and optimizes the mask decoder features at both sparse and dense prompt levels.
Experimental results demonstrate that our PA-SAM outperforms other SAM-based methods in high-quality, zero-shot, and open-set segmentation.
arXiv Detail & Related papers (2024-01-23T19:20:22Z)
- BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model [65.92173280096588]
We address the challenge of image resolution variation for the Segment Anything Model (SAM).
SAM, known for its zero-shot generalizability, exhibits performance degradation when faced with datasets of varying image sizes.
We present a bias-mode attention mask that allows each token to prioritize neighboring information (a minimal sketch of this idea follows the entry).
arXiv Detail & Related papers (2024-01-04T15:34:44Z)
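The abstract does not give the bias function. A common way to let each token prioritize its neighbors is an additive attention bias that grows more negative with spatial distance; the sketch below shows that general idea, with the linear decay and its scale being assumptions rather than BA-SAM's published formulation.

```python
import torch

def neighbor_bias(h: int, w: int, scale: float = 1.0) -> torch.Tensor:
    """Additive attention bias over an h*w token grid: zero for a token
    attending to itself, increasingly negative with Euclidean distance."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (h*w, 2)
    return -scale * torch.cdist(coords, coords)                        # (h*w, h*w)

# Applied inside attention: nearby tokens receive higher weights.
q = torch.randn(16, 64)                      # 16 tokens of a 4x4 grid, dim 64
k = torch.randn(16, 64)
logits = q @ k.T / 64 ** 0.5 + neighbor_bias(4, 4)
attn = logits.softmax(dim=-1)
```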
- TomoSAM: a 3D Slicer extension using SAM for tomography segmentation [62.997667081978825]
TomoSAM has been developed to integrate the cutting-edge Segment Anything Model (SAM) into 3D Slicer.
SAM is a promptable deep learning model that is able to identify objects and create image masks in a zero-shot manner.
The synergy between these tools aids in the segmentation of complex 3D datasets from tomography or other imaging techniques.
arXiv Detail & Related papers (2023-06-14T16:13:27Z)
- DeSAM: Decoupled Segment Anything Model for Generalizable Medical Image Segmentation [22.974876391669685]
Segment Anything Model (SAM) shows potential for improving the cross-domain robustness of medical image segmentation.
However, SAM performs significantly worse in automatic segmentation scenarios than when manually prompted.
DeSAM modifies SAM's mask decoder by introducing two new modules.
arXiv Detail & Related papers (2023-06-01T09:49:11Z)
- Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free personalization approach for the Segment Anything Model (SAM), named PerSAM.
Given only a single image with a reference mask, PerSAM first localizes the target concept via a location prior (a minimal sketch of this step follows the entry).
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z)
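The location prior can be read as comparing the reference object's features against the new image's features. The sketch below shows one plausible version using cosine similarity; the feature shapes, the averaging over the reference mask, and the point-prompt readout are assumptions for illustration, not PerSAM's exact procedure.

```python
import torch
import torch.nn.functional as F

def location_prior(ref_feat: torch.Tensor, ref_mask: torch.Tensor,
                   test_feat: torch.Tensor) -> torch.Tensor:
    """Confidence map for the target concept in a new image.
    ref_feat:  (C, H, W) features of the reference image (e.g. a SAM encoder).
    ref_mask:  (H, W) boolean mask of the target in the reference image.
    test_feat: (C, H, W) features of the test image."""
    # Target embedding: average the reference features inside the mask.
    target = (ref_feat * ref_mask).sum(dim=(1, 2)) / ref_mask.sum()   # (C,)
    target = F.normalize(target, dim=0)
    test = F.normalize(test_feat, dim=0)            # unit norm per pixel over channels
    return torch.einsum("c,chw->hw", target, test)  # cosine-similarity map

ref_feat = torch.randn(256, 64, 64)
ref_mask = torch.zeros(64, 64, dtype=torch.bool)
ref_mask[20:40, 20:40] = True                       # one-shot reference region
test_feat = torch.randn(256, 64, 64)
prior = location_prior(ref_feat, ref_mask, test_feat)
peak = (prior == prior.max()).nonzero()[0]          # most confident location,
                                                    # usable as a positive point prompt
```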
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.