Deshadow-Anything: When Segment Anything Model Meets Zero-shot shadow removal
- URL: http://arxiv.org/abs/2309.11715v3
- Date: Wed, 3 Jan 2024 02:01:09 GMT
- Title: Deshadow-Anything: When Segment Anything Model Meets Zero-shot shadow removal
- Authors: Xiao Feng Zhang, Tian Yi Song, Jia Wei Yao
- Abstract summary: We develop Deshadow-Anything, which leverages the generalization of large-scale datasets to achieve image shadow removal.
The diffusion model can diffuse along the edges and textures of an image, helping to remove shadows while preserving the details of the image.
Experiments on shadow removal tasks demonstrate that these methods can effectively improve image restoration performance.
- Score: 8.555176637147648
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segment Anything (SAM), an advanced universal image segmentation model
trained on an expansive visual dataset, has set a new benchmark in image
segmentation and computer vision. However, it struggles to distinguish shadows
from their backgrounds. To address this, we develop Deshadow-Anything:
leveraging the generalization afforded by large-scale datasets, we fine-tune
the model on them to achieve image shadow removal. The diffusion model can
diffuse along the edges and textures of an image, helping to remove shadows
while preserving the details of the image. Furthermore, we design
Multi-Self-Attention Guidance (MSAG) and adaptive input perturbation (DDPM-AIP)
to speed up the iterative training of the diffusion model. Experiments on
shadow removal tasks demonstrate that these methods effectively improve image
restoration performance.
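The listing does not include reference code, so the snippet below is only a rough, hypothetical sketch of the core idea the abstract describes: a SAM-derived shadow mask guiding a reverse-diffusion step so that only shadowed pixels are re-synthesized while the rest of the image stays anchored to the input. The `denoiser` interface, the DDIM-style update, and the mask-blending rule are illustrative assumptions; the paper's MSAG and DDPM-AIP components are not reproduced here.

```python
import torch

@torch.no_grad()
def masked_ddpm_step(denoiser, x_t, t, shadow_mask, x_input, alphas_cumprod):
    """One illustrative reverse-diffusion step that only edits shadow pixels.

    denoiser(x_t, t) -> predicted noise, a stand-in for the fine-tuned
    diffusion model described in the abstract (MSAG/DDPM-AIP not shown).
    shadow_mask: 1 inside shadows (e.g. produced by SAM), 0 elsewhere.
    x_input: the original shadowed image in [-1, 1], kept fixed outside the mask.
    alphas_cumprod: 1-D tensor of cumulative noise-schedule products.
    """
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)

    eps = denoiser(x_t, t)                                    # predicted noise
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # clean-image estimate
    x0_hat = x0_hat.clamp(-1, 1)

    # Deterministic (DDIM-style) update toward the estimated clean image.
    x_prev = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps

    # Re-noise the known (non-shadow) part of the input to the same timestep,
    # so only the shadowed region is actually synthesized by the model.
    noise = torch.randn_like(x_input)
    x_known = a_prev.sqrt() * x_input + (1 - a_prev).sqrt() * noise
    return shadow_mask * x_prev + (1 - shadow_mask) * x_known
```

Iterating this step from t = T - 1 down to 0 yields a shadow-free estimate inside the mask; whether the actual Deshadow-Anything pipeline blends known and unknown regions this way is an assumption of the sketch.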
Related papers
- UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation [64.01742988773745]
An increasing privacy concern exists regarding training large-scale image segmentation models on unauthorized private data.
We exploit the concept of unlearnable examples to make images unusable for model training by generating unlearnable noise and adding it to the original images.
We empirically verify the effectiveness of UnSeg across 6 mainstream image segmentation tasks, 10 widely used datasets, and 7 different network architectures.
arXiv Detail & Related papers (2024-10-13T16:34:46Z) - Soft-Hard Attention U-Net Model and Benchmark Dataset for Multiscale Image Shadow Removal [2.999888908665659]
This study proposes a novel deep learning architecture, named Soft-Hard Attention U-net (SHAU), focusing on multiscale shadow removal.
It provides a novel synthetic dataset, named Multiscale Shadow Removal dataset (MSRD), containing complex shadow patterns of multiple scales.
The results demonstrate the effectiveness of SHAU over the relevant state-of-the-art shadow removal methods across various benchmark datasets.
arXiv Detail & Related papers (2024-08-07T12:42:06Z) - Single-Image Shadow Removal Using Deep Learning: A Comprehensive Survey [78.84004293081631]
The patterns of shadows are arbitrary, varied, and often have highly complex trace structures.
The degradation caused by shadows is spatially non-uniform, resulting in inconsistencies in illumination and color between shadow and non-shadow areas.
Recent developments in this field are primarily driven by deep learning-based solutions.
arXiv Detail & Related papers (2024-07-11T20:58:38Z) - Shadow Generation for Composite Image Using Diffusion model [16.316311264197324]
We resort to a foundation model with rich prior knowledge of natural shadow images.
We first adapt ControlNet to our task and then propose intensity modulation modules to improve the shadow intensity.
Experimental results on both the DESOBA and DESOBAv2 datasets, as well as real composite images, demonstrate the superior capability of our model for the shadow generation task.
arXiv Detail & Related papers (2024-03-22T14:27:58Z) - Latent Feature-Guided Diffusion Models for Shadow Removal [50.02857194218859]
We propose the use of diffusion models as they offer a promising approach to gradually refine the details of shadow regions during the diffusion process.
Our method improves this process by conditioning on a learned latent feature space that inherits the characteristics of shadow-free images.
We demonstrate the effectiveness of our approach which outperforms the previous best method by 13% in terms of RMSE on the AISTD dataset.
arXiv Detail & Related papers (2023-12-04T18:59:55Z) - Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is a significant task that is still unresolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, but often cannot handle it well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z) - Learning Physical-Spatio-Temporal Features for Video Shadow Removal [42.95422940263425]
We propose the first data-driven video shadow removal model by exploiting three essential characteristics of video shadows.
Specifically, a dedicated physical branch is established to conduct local illumination estimation, which is more applicable to scenes with complex lighting textures.
To tackle the lack of paired shadow video datasets, we synthesize a dataset with the aid of the popular game GTAV by toggling shadows on and off.
arXiv Detail & Related papers (2023-03-16T14:55:31Z) - Leveraging Inpainting for Single-Image Shadow Removal [29.679542372017373]
In this work, we find that pretraining shadow removal networks on the image inpainting dataset can reduce the shadow remnants significantly.
A naive encoder-decoder network achieves restoration quality competitive with state-of-the-art methods using only 10% of the shadow and shadow-free image pairs.
Inspired by these observations, we formulate shadow removal as an adaptive fusion task that takes advantage of both shadow removal and image inpainting.
arXiv Detail & Related papers (2023-02-10T16:21:07Z) - ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow Removal [74.86415440438051]
We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant improvement in PSNR, increasing from 31.69 dB to 34.73 dB on the SRD dataset.
arXiv Detail & Related papers (2022-12-09T07:48:30Z) - Self-Supervised Shadow Removal [130.6657167667636]
We propose an unsupervised single-image shadow removal solution via self-supervised learning using a conditioned mask.
In contrast to existing literature, we do not require paired shadowed and shadow-free images; instead, we rely on self-supervision and jointly learn deep models to remove and add shadows to images (see the sketch after this list).
arXiv Detail & Related papers (2020-10-22T11:33:41Z)
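As a closing illustration of the self-supervised remove-and-re-add idea summarized in the last entry above, the sketch below couples two hypothetical networks with a mask-conditioned, cycle-style reconstruction loss. The network interfaces, the identity term, and the 0.5 weight are assumptions for illustration, not the paper's actual formulation.

```python
import torch.nn.functional as F

def self_supervised_shadow_loss(remover, adder, shadow_img, shadow_mask):
    """Illustrative cycle loss: remove shadows, then re-add them with the
    same mask, and require the result to match the original input.

    remover(img, mask) -> estimated shadow-free image  (hypothetical network)
    adder(img, mask)   -> re-shadowed image             (hypothetical network)
    No paired shadow-free ground truth is used anywhere in this loss.
    """
    shadow_free = remover(shadow_img, shadow_mask)   # remove shadows
    reshadowed = adder(shadow_free, shadow_mask)     # put them back with the same mask

    cycle = F.l1_loss(reshadowed, shadow_img)        # reconstruct the input image
    # Outside the mask, shadow removal should act as an identity mapping.
    identity = F.l1_loss(shadow_free * (1 - shadow_mask),
                         shadow_img * (1 - shadow_mask))
    return cycle + 0.5 * identity                    # 0.5 is an arbitrary weight
```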