CrossDehaze: Scaling Up Image Dehazing with Cross-Data Vision Alignment and Augmentation
- URL: http://arxiv.org/abs/2407.14823v1
- Date: Sat, 20 Jul 2024 10:00:20 GMT
- Title: CrossDehaze: Scaling Up Image Dehazing with Cross-Data Vision Alignment and Augmentation
- Authors: Yukai Shi, Zhipeng Weng, Yupei Lin, Cidan Shi, Xiaojun Yang, Liang Lin
- Abstract summary: Methods based on priors and deep learning have been proposed to address the task of image dehazing.
We propose a novel method of internal and external data augmentation to improve the existing dehazing methodology.
Our approach significantly outperforms other advanced methods in dehazing and produces dehazed images that are closest to real haze-free images.
- Score: 47.425906124301775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, as computer vision tasks have increasingly relied on high-quality image inputs, the task of image dehazing has received significant attention. Previously, many methods based on priors and deep learning have been proposed to address the task of image dehazing. Ignoring the domain gap between different data, former dehazing methods usually adopt multiple datasets for explicit training, which often conflicts with the methods' own assumptions. To address this problem, we propose a novel method of internal and external data augmentation to improve the existing dehazing methodology. By using a cross-data external augmentor, the dataset inherits samples from different domains that are firmly aligned, enabling the model to learn more robust and generalizable features. By using the internal data augmentation method, the model can fully exploit local information within the images, thereby obtaining more image details. To demonstrate the effectiveness of our proposed method, we conduct training on both the Natural Image Dataset (NID) and the Remote Sensing Image Dataset (RSID). Experimental results show that our method clearly resolves the domain gap between different dehazing datasets and presents a new pipeline for joint training in the dehazing task. Our approach significantly outperforms other advanced methods in dehazing and produces dehazed images that are closest to real haze-free images. The code will be available at: https://github.com/wengzp1/ScaleUpDehazing
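The abstract describes two complementary augmentations: an external augmentor that mixes aligned samples across domains (NID and RSID), and an internal augmentation that emphasises local image detail. As a minimal sketch only (not the authors' released code; the function names, the MixUp-style pixel blend, and the patch shuffle are all assumptions standing in for the paper's actual operators), the two ideas might look like:

```python
import numpy as np

def external_cross_domain_mix(img_a, img_b, lam=0.5):
    """MixUp-style blend of two aligned samples from different domains
    (e.g. one hazy image from NID, one from RSID).  Hypothetical
    stand-in for the paper's cross-data external augmentor."""
    return lam * img_a + (1.0 - lam) * img_b

def internal_patch_shuffle(img, patch=8, seed=0):
    """Shuffle non-overlapping patches within a single image so training
    is forced to exploit local information.  Hypothetical stand-in for
    the paper's internal augmentation."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    x = img[:gh * patch, :gw * patch]
    # Cut the image into a (gh*gw, patch, patch, c) stack of blocks.
    blocks = x.reshape(gh, patch, gw, patch, c).transpose(0, 2, 1, 3, 4)
    blocks = blocks.reshape(gh * gw, patch, patch, c)
    # Permute the blocks, then reassemble them into an image.
    blocks = blocks[rng.permutation(gh * gw)]
    out = blocks.reshape(gh, gw, patch, patch, c).transpose(0, 2, 1, 3, 4)
    return out.reshape(gh * patch, gw * patch, c)
```

The blend coefficient and patch size here are illustrative; the paper's augmentor aligns samples across datasets rather than simply interpolating pixels.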
Related papers
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Image Data Augmentation for Deep Learning: A Survey [8.817690876855728]
We systematically review different image data augmentation methods.
We propose a taxonomy of reviewed methods and present the strengths and limitations of these methods.
We also conduct extensive experiments with various data augmentation methods on three typical computer vision tasks.
arXiv Detail & Related papers (2022-04-19T02:05:56Z)
- Single Image Dehazing with An Independent Detail-Recovery Network [117.86146907611054]
We propose a single image dehazing method with an independent Detail Recovery Network (DRN).
The DRN aims to recover the dehazed image details through local and global branches respectively.
Our method outperforms the state-of-the-art dehazing methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-22T02:49:43Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent the image in low dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- Machine learning with limited data [1.2183405753834562]
We study few-shot image classification, in which only very few labeled samples are available.
The first method augments image features by mixing the styles of these images.
The second method applies spatial attention to explore the relations between patches of images.
arXiv Detail & Related papers (2021-01-18T17:10:39Z)
- Six-channel Image Representation for Cross-domain Object Detection [17.854940064699985]
Deep learning models are data-driven and the excellent performance is highly dependent on the abundant and diverse datasets.
Image-to-image translation techniques are employed to generate fake data of specific scenes to train the models.
We propose to combine the original 3-channel images with their corresponding GAN-generated fake images to form 6-channel representations of the dataset.
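The 6-channel representation amounts to channel-wise concatenation of each real image with its GAN-translated counterpart. A minimal sketch (the helper name is an assumption, and producing the translated image is outside its scope):

```python
import numpy as np

def six_channel(real_img, fake_img):
    """Stack a real 3-channel image with its GAN-generated counterpart
    along the channel axis, producing an (H, W, 6) input for a detector
    whose first layer accepts 6 channels."""
    assert real_img.shape == fake_img.shape and real_img.shape[-1] == 3
    return np.concatenate([real_img, fake_img], axis=-1)
```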
arXiv Detail & Related papers (2021-01-03T04:50:03Z)
- Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of cover images should be treated differently since they have divergent tolerabilities.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.