Progressive Update Guided Interdependent Networks for Single Image
Dehazing
- URL: http://arxiv.org/abs/2008.01701v4
- Date: Wed, 7 Jun 2023 17:28:39 GMT
- Title: Progressive Update Guided Interdependent Networks for Single Image
Dehazing
- Authors: Aupendu Kar, Sobhan Kanti Dhara, Debashis Sen, Prabir Kumar Biswas
- Abstract summary: Images with haze of different varieties often pose a significant challenge to dehazing.
We propose a multi-network dehazing framework containing novel interdependent dehazing and haze parameter updater networks.
- Score: 24.565068569913382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images with haze of different varieties often pose a significant challenge to
dehazing. Therefore, guidance by estimates of haze parameters related to the
variety would be beneficial, and their progressive update jointly with haze
reduction will allow effective dehazing. To this end, we propose a
multi-network dehazing framework containing novel interdependent dehazing and
haze parameter updater networks that operate in a progressive manner. The haze
parameters, transmission map and atmospheric light, are first estimated using
dedicated convolutional networks that allow color-cast handling. The estimated
parameters are then used to guide our dehazing module, where the estimates are
progressively updated by novel convolutional networks. The updating takes place
jointly with progressive dehazing using a network that invokes inter-step
dependencies. The joint progressive updating and dehazing gradually modify the
haze parameter values toward achieving effective dehazing. Through different
studies, our dehazing framework is shown to be more effective than
image-to-image mapping and predefined haze formation model based dehazing. The
framework is also found capable of handling a wide variety of hazy conditions
with different types and amounts of haze and color casts. Our dehazing
framework is qualitatively and quantitatively found to outperform the
state-of-the-art on synthetic and real-world hazy images of multiple datasets
with varied haze conditions.
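The haze parameters named in the abstract, the transmission map t(x) and the atmospheric light A, come from the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)). As a minimal sketch of how estimates of t and A can guide dehazing by inverting this model (this illustrates only the physical model, not the paper's progressive interdependent networks; function and variable names are my own):

```python
import numpy as np

def dehaze(hazy, t, A, t_min=0.1):
    """Invert the haze formation model I = J*t + A*(1 - t) to recover J.

    hazy : (H, W, 3) hazy image with values in [0, 1]
    t    : (H, W) estimated transmission map
    A    : (3,) estimated atmospheric light
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # floor t to avoid division blow-up
    return np.clip((hazy - A) / t + A, 0.0, 1.0)

# Usage: synthesize haze with known parameters, then invert with those
# same parameters; recovery is exact in this idealized setting.
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))                      # clean image
t = np.full((4, 4), 0.6)                       # uniform transmission
A = np.array([0.9, 0.9, 0.9])                  # bright atmospheric light
I = J * t[..., None] + A * (1 - t[..., None])  # hazy observation
J_hat = dehaze(I, t, A)                        # recovered clean image
```

In practice t and A are only estimates, which is why the paper progressively updates them jointly with the dehazing rather than trusting a single inversion.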
Related papers
- Deep Variational Bayesian Modeling of Haze Degradation Process [9.978089704770646]
We introduce a variational Bayesian framework for single image dehazing.
Based on a physical model for haze degradation, our framework leads to a new objective function.
Our framework can be seamlessly incorporated with other existing dehazing networks.
arXiv Detail & Related papers (2024-12-04T22:24:37Z)
- DRACO-DehazeNet: An Efficient Image Dehazing Network Combining Detail Recovery and a Novel Contrastive Learning Paradigm [3.649619954898362]
Detail Recovery And Contrastive DehazeNet is a detailed image recovery network that tailors enhancements to specific dehazed scene contexts.
A major innovation is its ability to train effectively with limited data, achieved through a novel quadruplet loss-based contrastive dehazing paradigm.
arXiv Detail & Related papers (2024-10-18T16:48:31Z)
- MultiDiff: Consistent Novel View Synthesis from a Single Image [60.04215655745264]
MultiDiff is a novel approach for consistent novel view synthesis of scenes from a single RGB image.
Our results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet.
arXiv Detail & Related papers (2024-06-26T17:53:51Z)
- PANet: A Physics-guided Parametric Augmentation Net for Image Dehazing by Hazing [33.39324790342096]
A huge domain gap between synthetic and real-world haze images degrades dehazing performance in practical settings.
We propose a Physics-guided Parametric Augmentation Network (PANet) that generates photo-realistic hazy and clean training pairs.
Our experimental results demonstrate that PANet can augment diverse realistic hazy images to enrich existing hazy image benchmarks.
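Physics-guided haze augmentation of this kind typically samples parameters of the haze formation model, with transmission tied to scene depth as t(x) = exp(-beta * d(x)). A rough sketch of generating a hazy training image from a clean one under that model (a generic illustration with my own function and parameter names, not PANet's actual pipeline):

```python
import numpy as np

def synthesize_haze(clean, beta, depth, A):
    """Apply the haze formation model I = J*t + A*(1 - t)
    with depth-dependent transmission t(x) = exp(-beta * d(x)).

    clean : (H, W, 3) clean image in [0, 1]
    beta  : scalar scattering coefficient (larger = denser haze)
    depth : (H, W) scene depth map
    A     : (3,) atmospheric light
    """
    t = np.exp(-beta * depth)[..., None]
    return clean * t + A * (1.0 - t)

# Usage: pixels far from the camera (large depth) converge toward A.
rng = np.random.default_rng(1)
J = rng.random((4, 4, 3))                        # clean image
depth = np.linspace(0.5, 3.0, 16).reshape(4, 4)  # toy depth map
A = np.array([0.8, 0.85, 0.9])                   # atmospheric light
hazy = synthesize_haze(J, beta=1.2, depth=depth, A=A)
```

Sampling beta, depth, and A over realistic ranges yields diverse hazy/clean training pairs from a single clean image.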
arXiv Detail & Related papers (2024-04-14T14:24:13Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- Cross-domain Compositing with Pretrained Diffusion Models [34.98199766006208]
We employ a localized, iterative refinement scheme which infuses the injected objects with contextual information derived from the background scene.
Our method produces higher quality and realistic results without requiring any annotations or training.
arXiv Detail & Related papers (2023-02-20T18:54:04Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.