Translating multispectral imagery to nighttime imagery via conditional
generative adversarial networks
- URL: http://arxiv.org/abs/2001.05848v1
- Date: Sat, 28 Dec 2019 03:20:29 GMT
- Title: Translating multispectral imagery to nighttime imagery via conditional
generative adversarial networks
- Authors: Xiao Huang, Dong Xu, Zhenlong Li, Cuizhen Wang
- Abstract summary: This study explores the potential of conditional Generative Adversarial Networks (cGAN) in translating multispectral imagery to nighttime imagery.
A popular cGAN framework, pix2pix, was adopted and modified to facilitate this translation.
With additional social media data, the generated nighttime imagery can closely match the ground-truth imagery.
- Score: 24.28488767429697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nighttime satellite imagery has been applied in a wide range of fields.
However, our limited understanding of how observed light intensity is formed
and whether it can be simulated greatly hinders its further application. This
study explores the potential of conditional Generative Adversarial Networks
(cGAN) in translating multispectral imagery to nighttime imagery. A popular
cGAN framework, pix2pix, was adopted and modified to facilitate this
translation using gridded training image pairs derived from Landsat 8 and
Visible Infrared Imaging Radiometer Suite (VIIRS). The results of this study
demonstrate the feasibility of multispectral-to-nighttime translation and
further indicate that, with additional social media data, the generated
nighttime imagery can closely match the ground-truth imagery. This study fills
a gap in understanding the composition of satellite-observed nighttime light
and provides new paradigms for solving emerging problems in nighttime remote
sensing, including nighttime series construction, light desaturation, and
multi-sensor calibration.
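For concreteness, the following is a minimal PyTorch sketch of the pix2pix-style objective the abstract describes: a generator translating multispectral tiles to a nighttime-light band, trained against a PatchGAN-style discriminator with an adversarial plus L1 loss. The band counts, tile size, toy layer sizes, and `lambda_l1` weight are illustrative assumptions, not the paper's modified configuration.

```python
# Sketch of a pix2pix-style cGAN objective (assumed configuration, not the
# paper's exact architecture): translate multispectral tiles (e.g. 7 Landsat 8
# bands) to a single-band nighttime-light tile (VIIRS-like).
import torch
import torch.nn as nn

IN_BANDS, OUT_BANDS, TILE = 7, 1, 64  # assumed band/tile layout

# Toy encoder-decoder generator standing in for pix2pix's U-Net.
G = nn.Sequential(
    nn.Conv2d(IN_BANDS, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, OUT_BANDS, 4, stride=2, padding=1), nn.Tanh(),
)

# PatchGAN-style discriminator: judges (input, output) pairs patch-wise.
D = nn.Sequential(
    nn.Conv2d(IN_BANDS + OUT_BANDS, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0  # pix2pix's default L1 weight; an assumption here
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(x, y):
    """One pix2pix step on a batch of (multispectral x, nighttime y) tiles."""
    fake = G(x)

    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake.detach()], dim=1))
    loss_d = 0.5 * (bce(d_real, torch.ones_like(d_real))
                    + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to ground truth in L1.
    d_fake = D(torch.cat([x, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test on random tiles standing in for gridded Landsat/VIIRS pairs.
x = torch.randn(2, IN_BANDS, TILE, TILE)
y = torch.randn(2, OUT_BANDS, TILE, TILE).clamp(-1, 1)
print(train_step(x, y))
```

In the study, the training pairs come from gridding co-registered Landsat 8 and VIIRS scenes into aligned tiles; the random tensors above merely stand in for such pairs.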
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss that constrains the fused image to a normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Nighttime Thermal Infrared Image Colorization with Feedback-based Object Appearance Learning [27.58748298687474]
We propose a generative adversarial network incorporating feedback-based object appearance learning (FoalGAN).
FoalGAN is not only effective for appearance learning of small objects, but also outperforms other image translation methods in terms of semantic preservation and edge consistency.
arXiv Detail & Related papers (2023-10-24T09:59:55Z)
- Disentangled Contrastive Image Translation for Nighttime Surveillance [87.03178320662592]
Nighttime surveillance suffers from degradation due to poor illumination and arduous human annotations.
Existing methods rely on multi-spectral images to perceive objects in the dark, which are troubled by low resolution and color absence.
We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day.
This paper contributes a new surveillance dataset called NightSuR, which includes six scenes to support the study of nighttime surveillance.
arXiv Detail & Related papers (2023-07-11T06:40:27Z)
- Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most of the existing methods are proposed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
arXiv Detail & Related papers (2022-08-30T13:09:59Z)
- Let There be Light: Improved Traffic Surveillance via Detail Preserving Night-to-Day Transfer [19.33490492872067]
We propose a framework that uses image translation to alleviate the accuracy decline when object detection is applied under adverse conditions.
To alleviate the detail corruption caused by Generative Adversarial Networks (GANs), we propose a Kernel Prediction Network (KPN) based method to refine the nighttime-to-daytime image translation.
arXiv Detail & Related papers (2021-05-11T13:18:50Z)
- Thermal Infrared Image Colorization for Nighttime Driving Scenes with Top-Down Guided Attention [14.527765677864913]
We propose a toP-down attEntion And gRadient aLignment based GAN, referred to as PearlGAN.
A top-down guided attention module and an elaborate attentional loss are first designed to reduce the semantic encoding ambiguity during translation.
In addition, pixel-level annotation is carried out on a subset of FLIR and KAIST datasets to evaluate the semantic preservation performance of multiple translation methods.
arXiv Detail & Related papers (2021-04-29T14:35:25Z)
- NightVision: Generating Nighttime Satellite Imagery from Infra-Red Observations [0.6127835361805833]
This work shows how deep learning can be applied to generate visible images from infrared observations using U-Net based architectures.
The proposed methods show promising results, achieving a structural similarity index (SSIM) of up to 86% on an independent test set (a hedged SSIM sketch follows this list).
arXiv Detail & Related papers (2020-11-13T16:55:46Z)
- Nighttime Dehazing with a Synthetic Benchmark [147.21955799938115]
We propose a novel synthetic method called 3R to simulate nighttime hazy images from daytime clear images.
We generate realistic nighttime hazy images by sampling real-world light colors from a prior empirical distribution.
Experimental results demonstrate its superiority over state-of-the-art methods in terms of both image quality and runtime.
arXiv Detail & Related papers (2020-08-10T02:16:46Z)
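On the SSIM figure quoted in the NightVision entry above, the following is a minimal sketch of scoring a generated image against its ground truth with scikit-image. It is not the authors' evaluation code; the array shapes, value range, and stand-in data are assumptions for illustration.

```python
# Hedged sketch: SSIM between a generated tile and ground truth, as in the
# "up to 86%" figure quoted in the NightVision entry above. Not the authors'
# evaluation code; shapes and data range are assumed.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256)).astype(np.float32)  # stand-in tile
generated = np.clip(
    ground_truth + 0.05 * rng.standard_normal((256, 256)).astype(np.float32),
    0.0, 1.0,
)

# data_range must match the image value range (here floats in [0, 1]).
score = structural_similarity(ground_truth, generated, data_range=1.0)
print(f"SSIM = {score:.3f}")  # 1.0 means identical images
```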