NightVision: Generating Nighttime Satellite Imagery from Infra-Red
Observations
- URL: http://arxiv.org/abs/2011.07017v2
- Date: Tue, 8 Dec 2020 15:43:52 GMT
- Title: NightVision: Generating Nighttime Satellite Imagery from Infra-Red
Observations
- Authors: Paula Harder, William Jones, Redouane Lguensat, Shahine Bouabid, James
Fulton, Dánell Quesada-Chacón, Aris Marcolongo, Sofija Stefanović,
Yuhan Rao, Peter Manshausen, Duncan Watson-Parris
- Abstract summary: This work shows how deep learning can be applied to generate visible images from infra-red observations using U-Net based architectures.
The proposed methods show promising results, achieving a structural similarity index (SSIM) of up to 86% on an independent test set.
- Score: 0.6127835361805833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent applications of machine learning to satellite imagery
rely on visible images and therefore suffer from a lack of data during
the night. This gap can be filled by employing available infra-red observations
to generate visible images. This work presents how deep learning can be applied
successfully to create those images by using U-Net based architectures. The
proposed methods show promising results, achieving a structural similarity
index (SSIM) of up to 86% on an independent test set and providing visually
convincing output images generated from infra-red observations.
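The paper reports its results with the structural similarity index (SSIM). As a minimal illustration of the metric, the sketch below computes a single-window SSIM in NumPy; the constants follow the common convention (C1 = (0.01·L)², C2 = (0.03·L)²), but this is not the authors' evaluation code, and production implementations usually average SSIM over local sliding windows rather than the whole image.

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Global (single-window) structural similarity between two images.

    A simplification for illustration: real SSIM evaluation averages
    this quantity over local sliding windows.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(ssim(img, img))  # identical images score exactly 1.0
print(ssim(img, np.zeros_like(img)))  # dissimilar images score well below 1.0
```

A score of 0.86, as reported in the abstract, therefore indicates strong structural agreement between the generated and the true visible image.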
Related papers
- Weakly-supervised Camera Localization by Ground-to-satellite Image Registration [52.54992898069471]
We propose a weakly supervised learning strategy for ground-to-satellite image registration.
It derives positive and negative satellite images for each ground image.
We also propose a self-supervision strategy for cross-view image relative rotation estimation.
arXiv Detail & Related papers (2024-09-10T12:57:16Z)
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous arts mainly focus on the low-light images captured in the visible spectrum using pixel-wise loss.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Detecting Images Generated by Diffusers [12.986394431694206]
We consider images generated from captions in the MSCOCO and Wikimedia datasets using two state-of-the-art models: Stable Diffusion and GLIDE.
Our experiments show that it is possible to detect the generated images using simple Multi-Layer Perceptrons.
We find that incorporating the associated textual information with the images rarely leads to significant improvement in detection results.
arXiv Detail & Related papers (2023-03-09T14:14:29Z)
- Convolutional Neural Processes for Inpainting Satellite Images [56.032183666893246]
Inpainting involves predicting what is missing based on the known pixels and is an old problem in image processing.
We show ConvNPs can outperform classical methods and state-of-the-art deep learning inpainting models on a scanline inpainting problem for LANDSAT 7 satellite images.
arXiv Detail & Related papers (2022-05-24T23:29:04Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Creating synthetic meteorology satellite visible light images during night based on GAN method [0.0]
We propose a method based on deep learning to create synthetic satellite visible light images during night.
Specifically, we train a Generative Adversarial Network (GAN) model to generate visible light images.
Experiments based on ECMWF NWP products and FY-4A meteorological satellite visible light and infrared channel data show that the proposed methods can be effective.
arXiv Detail & Related papers (2021-07-21T16:05:26Z)
- Generation of the NIR spectral Band for Satellite Images with Convolutional Neural Networks [0.0]
Deep neural networks allow generating artificial spectral information, such as for the image colorization problem.
We study the generative adversarial network (GAN) approach to NIR band generation using only the RGB channels of high-resolution satellite imagery.
arXiv Detail & Related papers (2021-06-13T15:14:57Z)
- Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks trained to convert periocular images between the visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
- The color out of space: learning self-supervised representations for Earth Observation imagery [10.019106184219515]
We propose to learn meaningful representations from satellite imagery, leveraging its high-dimensional spectral bands to reconstruct visible colors.
We conduct experiments on land cover classification (BigEarthNet) and West Nile Virus detection, showing that colorization is a solid pretext task for training a feature extractor.
arXiv Detail & Related papers (2020-06-22T10:21:36Z)
- Translating multispectral imagery to nighttime imagery via conditional generative adversarial networks [24.28488767429697]
This study explores the potential of conditional Generative Adversarial Networks (cGAN) in translating multispectral imagery to nighttime imagery.
A popular cGAN framework, pix2pix, was adopted and modified to facilitate this translation.
With the additional social media data, the generated nighttime imagery can be very similar to the ground-truth imagery.
arXiv Detail & Related papers (2019-12-28T03:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.