Domain-Aware Unsupervised Hyperspectral Reconstruction for Aerial Image
Dehazing
- URL: http://arxiv.org/abs/2011.03677v1
- Date: Sat, 7 Nov 2020 03:30:52 GMT
- Title: Domain-Aware Unsupervised Hyperspectral Reconstruction for Aerial Image
Dehazing
- Authors: Aditya Mehta, Harsh Sinha, Murari Mandal, Pratik Narang
- Abstract summary: We propose SkyGAN for haze removal in aerial images.
SkyGAN consists of 1) a domain-aware hazy-to-hyperspectral (H2H) module, and 2) a conditional GAN (cGAN) based multi-cue image-to-image translation module (I2I)
- Score: 16.190455993566864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Haze removal in aerial images is a challenging problem due to considerable
variation in spatial details and varying contrast. Changes in particulate
matter density often lead to degradation in visibility. Therefore, several
approaches utilize multi-spectral data as auxiliary information for haze
removal. In this paper, we propose SkyGAN for haze removal in aerial images.
SkyGAN consists of 1) a domain-aware hazy-to-hyperspectral (H2H) module, and 2)
a conditional GAN (cGAN) based multi-cue image-to-image translation module
(I2I) for dehazing. The proposed H2H module reconstructs several visual bands
from RGB images in an unsupervised manner, which overcomes the lack of hazy
hyperspectral aerial image datasets. The module utilizes task supervision and
domain adaptation in order to create a "hyperspectral catalyst" for image
dehazing. The I2I module uses the hyperspectral catalyst along with a
12-channel multi-cue input and performs effective image dehazing by utilizing
the entire visual spectrum. In addition, this work introduces a new dataset,
called Hazy Aerial-Image (HAI) dataset, that contains more than 65,000 pairs of
hazy and ground truth aerial images with realistic, non-homogeneous haze of
varying density. The performance of SkyGAN is evaluated on the recent
SateHaze1k dataset as well as the HAI dataset. We also present a comprehensive
evaluation of the HAI dataset with a representative set of state-of-the-art
techniques in terms of PSNR and SSIM.
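The abstract evaluates dehazing quality in terms of PSNR and SSIM. As a quick illustration (not code from the paper), PSNR is derived from the mean squared error between a restored image and its ground truth; a minimal pure-Python sketch:

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    `reference` and `restored` are flat sequences of pixel values,
    e.g. a dehazed output versus its ground-truth aerial image.
    """
    if len(reference) != len(restored):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: a 4-pixel "ground truth" vs. a slightly-off "dehazed" output.
gt = [120, 130, 140, 150]
out = [118, 131, 139, 152]
print(round(psnr(gt, out), 2))  # → 44.15
```

Higher PSNR means the restored image is closer to the haze-free ground truth; SSIM complements it by comparing local structure rather than raw pixel error.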
Related papers
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z)
- Deep Multimodal Fusion for Semantic Segmentation of Remote Sensing Earth Observation Data [0.08192907805418582]
This paper proposes a late fusion deep learning model (LF-DLM) for semantic segmentation.
One branch integrates detailed textures from aerial imagery using a UNetFormer with a Multi-Axis Vision Transformer (MaxViT) backbone.
The other branch captures complex spatio-temporal dynamics from the Sentinel-2 satellite image time series using a U-Net with Temporal Attention Encoder (U-TAE).
arXiv Detail & Related papers (2024-10-01T07:50:37Z)
- HazeSpace2M: A Dataset for Haze Aware Single Image Dehazing [26.97153700921866]
This research introduces the HazeSpace2M dataset, a collection of over 2 million images designed to enhance dehazing through haze type classification.
Using the dataset, we introduce a technique of haze type classification followed by specialized dehazers to clear hazy images.
Our approach classifies haze types before applying type-specific dehazing, improving clarity in real-life hazy images.
arXiv Detail & Related papers (2024-09-25T23:47:25Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models [72.76182801289497]
We present a novel method, Aerial Diffusion, for generating aerial views from a single ground-view image using text guidance.
We address two main challenges corresponding to domain gap between the ground-view and the aerial view.
Aerial Diffusion is the first approach that performs ground-to-aerial translation in an unsupervised manner.
arXiv Detail & Related papers (2023-03-15T22:26:09Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
- NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images [95.00583228823446]
NH-HAZE is a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images.
This work presents an objective assessment of several state-of-the-art single image dehazing methods that were evaluated using NH-HAZE dataset.
arXiv Detail & Related papers (2020-05-07T15:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.