Domain-Aware Unsupervised Hyperspectral Reconstruction for Aerial Image
Dehazing
- URL: http://arxiv.org/abs/2011.03677v1
- Date: Sat, 7 Nov 2020 03:30:52 GMT
- Title: Domain-Aware Unsupervised Hyperspectral Reconstruction for Aerial Image
Dehazing
- Authors: Aditya Mehta, Harsh Sinha, Murari Mandal, Pratik Narang
- Abstract summary: We propose SkyGAN for haze removal in aerial images.
SkyGAN consists of 1) a domain-aware hazy-to-hyperspectral (H2H) module, and 2) a conditional GAN (cGAN) based multi-cue image-to-image translation module (I2I)
- Score: 16.190455993566864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Haze removal in aerial images is a challenging problem due to considerable
variation in spatial details and varying contrast. Changes in particulate
matter density often lead to degradation in visibility. Therefore, several
approaches utilize multi-spectral data as auxiliary information for haze
removal. In this paper, we propose SkyGAN for haze removal in aerial images.
SkyGAN consists of 1) a domain-aware hazy-to-hyperspectral (H2H) module, and 2)
a conditional GAN (cGAN) based multi-cue image-to-image translation module
(I2I) for dehazing. The proposed H2H module reconstructs several visual bands
from RGB images in an unsupervised manner, which overcomes the lack of hazy
hyperspectral aerial image datasets. The module utilizes task supervision and
domain adaptation in order to create a "hyperspectral catalyst" for image
dehazing. The I2I module uses the hyperspectral catalyst along with a
12-channel multi-cue input and performs effective image dehazing by utilizing
the entire visual spectrum. In addition, this work introduces a new dataset,
called Hazy Aerial-Image (HAI) dataset, that contains more than 65,000 pairs of
hazy and ground truth aerial images with realistic, non-homogeneous haze of
varying density. The performance of SkyGAN is evaluated on the recent
SateHaze1k dataset as well as the HAI dataset. We also present a comprehensive
evaluation of the HAI dataset with a representative set of state-of-the-art
techniques in terms of PSNR and SSIM.
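The PSNR and SSIM metrics used for the evaluation can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' evaluation code; note in particular that standard SSIM averages the score over local Gaussian-weighted windows, whereas this simplified variant computes a single global score just to show the luminance/contrast/structure terms.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def global_ssim(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified single-window SSIM (the standard metric averages over
    local 11x11 Gaussian windows; this global variant is for illustration)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# A uniform offset of 10 gray levels gives MSE = 100, so PSNR ~= 28.13 dB.
ref = np.zeros((64, 64))
hazy = ref + 10.0
print(psnr(ref, hazy))        # ~28.13
print(global_ssim(ref, hazy))
```

Higher PSNR and SSIM values indicate a dehazed output closer to the haze-free ground truth, which is how the dataset benchmarks below are typically ranked.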
Related papers
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models [72.76182801289497]
We present a novel method, Aerial Diffusion, for generating aerial views from a single ground-view image using text guidance.
We address two main challenges corresponding to domain gap between the ground-view and the aerial view.
Aerial Diffusion is the first approach that performs ground-to-aerial translation in an unsupervised manner.
arXiv Detail & Related papers (2023-03-15T22:26:09Z)
- Multi-Modal Domain Fusion for Multi-modal Aerial View Object Classification [4.438928487047433]
A novel Multi-Modal Domain Fusion (MDF) network is proposed to learn domain-invariant features from multi-modal data.
The network achieves top-10 performance in Track-1 with an accuracy of 25.3% and top-5 performance in Track-2 with an accuracy of 34.26%.
arXiv Detail & Related papers (2022-12-14T05:14:02Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- RelationRS: Relationship Representation Network for Object Detection in Aerial Images [15.269897893563417]
We propose a relationship representation network for object detection in aerial images (RelationRS).
The dual relationship module learns the potential relationship between features of different scales, and between different scenes from different patches, in the same iteration.
The bridging visual representations module (BVR) is introduced into the field of aerial images to improve the object detection effect in images with complex backgrounds.
arXiv Detail & Related papers (2021-10-13T14:02:33Z)
- Dehaze-GLCGAN: Unpaired Single Image De-hazing via Adversarial Training [3.5788754401889014]
We propose a dehazing Global-Local Cycle-consistent Generative Adversarial Network (Dehaze-GLCGAN) for single image de-hazing.
Our experiments over three benchmark datasets show that our network outperforms previous work in terms of PSNR and SSIM.
arXiv Detail & Related papers (2020-08-15T02:43:00Z)
- EAGLE: Large-scale Vehicle Detection Dataset in Real-World Scenarios using Aerial Imagery [3.8902657229395894]
We introduce a large-scale dataset for multi-class vehicle detection with object orientation information in aerial imagery.
It features high-resolution aerial images composed of different real-world situations with a wide variety of camera sensor, resolution, flight altitude, weather, illumination, haze, shadow, time, city, country, occlusion, and camera angle.
It contains 215,986 instances annotated with oriented bounding boxes defined by four points and orientation, making it by far the largest dataset to date in this task.
It also supports research on haze and shadow removal, as well as super-resolution and in-painting applications.
arXiv Detail & Related papers (2020-07-12T23:00:30Z)
- NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images [95.00583228823446]
NH-HAZE is a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images.
This work presents an objective assessment of several state-of-the-art single image dehazing methods that were evaluated using NH-HAZE dataset.
arXiv Detail & Related papers (2020-05-07T15:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.