Unpaired Quad-Path Cycle Consistent Adversarial Networks for Single
Image Defogging
- URL: http://arxiv.org/abs/2202.09553v3
- Date: Thu, 27 Apr 2023 02:30:20 GMT
- Title: Unpaired Quad-Path Cycle Consistent Adversarial Networks for Single
Image Defogging
- Authors: Wei Liu, Cheng Chen, Rui Jiang, Tao Lu and Zixiang Xiong
- Abstract summary: We develop a novel generative adversarial network, called quad-path cycle consistent adversarial network (QPC-Net) for single image defogging.
QPC-Net consists of a Fog2Fogfree block and a Fogfree2Fog block.
We show that QPC-Net outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial learning-based image defogging methods have been extensively
studied in computer vision due to their remarkable performance. However, most
existing methods have limited defogging capabilities for real cases because
they are trained on the paired clear and synthesized foggy images of the same
scenes. In addition, they have limitations in preserving vivid color and rich
textual details in defogging. To address these issues, we develop a novel
generative adversarial network, called quad-path cycle consistent adversarial
network (QPC-Net), for single image defogging. QPC-Net consists of a
Fog2Fogfree block and a Fogfree2Fog block. Each block contains three
learning-based modules, namely fog removal, color-texture recovery, and fog
synthesis, which sequentially compose dual paths that constrain each other to
generate high-quality images. Specifically, the color-texture recovery module
is designed to exploit the self-similarity of texture and structure information
by learning the holistic channel-spatial feature correlations between the foggy
image and several of its derived images. Moreover, in the fog synthesis module,
we utilize the atmospheric scattering model to guide generation, improving
output quality by optimizing the atmospheric light with a novel sky
segmentation network. Extensive experiments on both synthetic and
real-world datasets show that QPC-Net outperforms state-of-the-art defogging
methods in terms of quantitative accuracy and subjective visual quality.
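The fog synthesis module described above is guided by the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)), where t(x) = exp(-beta * d(x)) is the transmission at scene depth d(x) and A is the global atmospheric light. A minimal NumPy sketch of this model and its inversion follows; the function names and parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def synthesize_fog(clear, depth, A=0.9, beta=1.0):
    """Atmospheric scattering model: I = J*t + A*(1 - t), t = exp(-beta*d).
    clear: HxWx3 float image in [0, 1]; depth: HxW depth map."""
    t = np.exp(-beta * depth)[..., None]   # transmission map, broadcast over RGB
    return clear * t + A * (1.0 - t)

def defog(foggy, depth, A=0.9, beta=1.0):
    """Invert the model to recover the clear scene J given A and depth."""
    t = np.exp(-beta * depth)[..., None]
    t = np.maximum(t, 0.1)                 # floor t to avoid blow-up in dense fog
    return (foggy - A * (1.0 - t)) / t

rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))                  # toy clear image
d = rng.random((4, 4))                     # toy depth in [0, 1]
I = synthesize_fog(J, d)
J_rec = defog(I, d)                        # exact round trip while t > 0.1
```

In a defogging network the depth (or transmission) and A are estimated rather than given; the sketch only shows why an accurate atmospheric light estimate, which QPC-Net pursues via sky segmentation, matters for inverting the model.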
Related papers
- Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach [44.03561901593423]
This paper introduces a content-adaptive diffusion model for scalable image compression.
The proposed method encodes fine textures through a diffusion process, enhancing perceptual quality.
Experiments demonstrate the effectiveness of the proposed framework in both image reconstruction and downstream machine vision tasks.
arXiv Detail & Related papers (2024-10-08T15:48:34Z)
- Multi-Scale Texture Loss for CT denoising with GANs [0.9349653765341301]
Generative Adversarial Networks (GANs) have proven to be a powerful framework for denoising applications in medical imaging.
This work presents a loss function that leverages the intrinsic multi-scale nature of the Gray-Level Co-occurrence Matrix (GLCM).
Our approach also introduces a self-attention layer that dynamically aggregates the multi-scale texture information extracted from the images.
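The GLCM used in the loss above tabulates how often pairs of gray levels co-occur at a fixed pixel offset; multi-scale variants recompute it over several offsets or image resolutions. A minimal single-offset NumPy sketch (the quantization to 8 levels and the non-negative offset convention are illustrative assumptions, not details of the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized Gray-Level Co-occurrence Matrix for non-negative offsets:
    entry (i, j) is the frequency with which gray level i occurs at a pixel
    whose (dy, dx)-neighbor has gray level j."""
    q = np.clip((img.astype(np.int64) * levels) // 256, 0, levels - 1)
    h, w = q.shape
    a = q[: h - dy, : w - dx].ravel()      # reference pixels
    b = q[dy:, dx:].ravel()                # their offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1)                # count co-occurring level pairs
    return m / m.sum()

def glcm_contrast(m):
    """Haralick contrast feature: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

# Toy image: left half black, right half white -> high-contrast boundary pairs
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
m = glcm(img, dx=1, dy=0, levels=8)
```

Texture losses built on such matrices compare GLCM-derived statistics (contrast, homogeneity, etc.) between denoised and reference images rather than raw pixel values.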
arXiv Detail & Related papers (2024-03-25T11:28:52Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- DiffCR: A Fast Conditional Diffusion Framework for Cloud Removal from Optical Satellite Images [27.02507384522271]
This paper presents a novel framework called DiffCR, which leverages conditional guided diffusion with deep convolutional networks for high-performance cloud removal for optical satellite imagery.
We introduce a decoupled encoder for conditional image feature extraction, providing a robust color representation to ensure the close similarity of appearance information between the conditional input and the synthesized output.
arXiv Detail & Related papers (2023-08-08T17:34:28Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of detecting deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Dual-Scale Single Image Dehazing Via Neural Augmentation [29.019279446792623]
A novel single image dehazing algorithm is introduced by combining model-based and data-driven approaches.
Results indicate that the proposed algorithm effectively removes haze from both real-world and synthetic hazy images.
arXiv Detail & Related papers (2022-09-13T11:56:03Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Efficient and Model-Based Infrared and Visible Image Fusion Via Algorithm Unrolling [24.83209572888164]
Infrared and visible image fusion (IVIF) aims to produce images that retain thermal radiation information from infrared images and texture details from visible images.
A model-based convolutional neural network (CNN) model is proposed to overcome the shortcomings of traditional CNN-based IVIF models.
arXiv Detail & Related papers (2020-05-12T16:15:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.