Creating synthetic meteorology satellite visible light images during
night based on GAN method
- URL: http://arxiv.org/abs/2108.04330v1
- Date: Wed, 21 Jul 2021 16:05:26 GMT
- Title: Creating synthetic meteorology satellite visible light images during
night based on GAN method
- Authors: CHENG Wencong (1) ((1) Beijing Aviation Meteorological Institute)
- Abstract summary: We propose a deep-learning method to create synthetic satellite visible light images at night.
Specifically, we train a Generative Adversarial Network (GAN) model to generate visible light images.
Experiments based on ECMWF NWP products and FY-4A meteorology satellite visible light and infrared channel data show that the proposed method is effective.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meteorology satellite visible light images are critical for meteorology
support and forecasting. However, no such data exist during nighttime.
To overcome this, we propose a deep-learning method to create
synthetic satellite visible light images at night. Specifically, to produce
more realistic products, we train a Generative Adversarial Network (GAN) model
to generate visible light images given the corresponding satellite infrared
images and numerical weather prediction (NWP) products. To better model the
nonlinear relationship from infrared data and NWP products to visible light
images, we propose to use a channel-wise attention mechanism, e.g., SEBlock,
to quantitatively weight the input channels. Experiments based on the ECMWF
NWP products and the FY-4A meteorology satellite visible light and infrared
channel data show that the proposed method can effectively create
realistic synthetic satellite visible light images at night.
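The channel-wise attention the abstract refers to (SEBlock) is a squeeze-and-excitation gate: globally pool each input channel, pass the pooled vector through a small bottleneck MLP, and use the resulting sigmoid weights to rescale the channels. The following is a minimal NumPy sketch for illustration only, not the authors' implementation; the weights `w1` and `w2` are random placeholders standing in for learned parameters.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation gate over the channels of x, shape (C, H, W)."""
    # Squeeze: global average pooling, one scalar per channel -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, yielding per-channel gates
    h = np.maximum(0.0, w1 @ z)            # (C // reduction,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # (C,) gates in (0, 1)
    # Scale: reweight each input channel (e.g., IR channels vs. NWP fields)
    return x * s[:, None, None]

rng = np.random.default_rng(0)
c, height, width = 8, 16, 16          # hypothetical input stack: 8 channels
x = rng.standard_normal((c, height, width))
w1 = rng.standard_normal((c // 4, c)) * 0.1   # reduction ratio 4 (assumed)
w2 = rng.standard_normal((c, c // 4)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (8, 16, 16) -- same shape, channels rescaled
```

In the paper's setting the gated stack would then feed the GAN generator, letting the network learn which infrared channels and NWP fields matter most for reconstructing visible reflectance.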
Related papers
- CrossViewDiff: A Cross-View Diffusion Model for Satellite-to-Street View Synthesis [54.852701978617056]
CrossViewDiff is a cross-view diffusion model for satellite-to-street view synthesis.
To address the challenges posed by the large discrepancy across views, we design the satellite scene structure estimation and cross-view texture mapping modules.
To achieve a more comprehensive evaluation of the synthesis results, we additionally design a GPT-based scoring method.
arXiv Detail & Related papers (2024-08-27T03:41:44Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Simulating Nighttime Visible Satellite Imagery of Tropical Cyclones
Using Conditional Generative Adversarial Networks [10.76837828367292]
This study presents a Conditional Generative Adversarial Networks (CGAN) model that generates highly accurate nighttime visible reflectance.
The model was trained and validated using target area observations of the Advanced Himawari Imager (AHI) in the daytime.
This study also presents the first nighttime model validation using the Day/Night Band (DNB) of the Visible/Infrared Imager Radiometer Suite (VIIRS).
arXiv Detail & Related papers (2024-01-22T03:44:35Z) - Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - NeRF applied to satellite imagery for surface reconstruction [5.027411102165872]
We present Surf-NeRF, a modified implementation of the recently introduced Shadow Neural Radiance Field (S-NeRF) model.
This method is able to synthesize novel views from a sparse set of satellite images of a scene, while accounting for the variation in lighting present in the pictures.
The trained model can also be used to accurately estimate the surface elevation of the scene, which is often a desirable quantity for satellite observation applications.
arXiv Detail & Related papers (2023-04-09T01:37:13Z) - Attention-Based Scattering Network for Satellite Imagery [0.0]
We leverage the scattering transform to extract high-level features without additional trainable parameters.
Experiments show promising results on estimating tropical cyclone intensity and predicting the occurrence of lightning from satellite imagery.
arXiv Detail & Related papers (2022-10-21T18:25:34Z) - Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient
Objects and Shadow Modeling Using RPC Cameras [10.269997499911668]
We introduce the Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end model for learning multi-view satellite photogrammetry in the wild.
Sat-NeRF combines some of the latest trends in neural rendering with native satellite camera models.
We evaluate Sat-NeRF using WorldView-3 images from different locations and stress the advantages of applying a bundle adjustment to the satellite camera models prior to training.
arXiv Detail & Related papers (2022-03-16T19:18:46Z) - Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z) - Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z) - NightVision: Generating Nighttime Satellite Imagery from Infra-Red
Observations [0.6127835361805833]
This work presents how deep learning can be applied successfully to create visible images by using U-Net based architectures.
The proposed methods show promising results, achieving a structural similarity index (SSIM) up to 86% on an independent test set.
arXiv Detail & Related papers (2020-11-13T16:55:46Z)
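The structural similarity index (SSIM) cited in the NightVision entry compares luminance, contrast, and structure between a synthesized image and its reference. Below is a simplified single-window NumPy sketch for illustration; practical evaluations usually compute SSIM over a sliding Gaussian window (e.g., `skimage.metrics.structural_similarity`), and the constants follow the standard choices C1 = (0.01·L)² and C2 = (0.03·L)² for data range L.

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Simplified whole-image SSIM (no sliding window)."""
    c1 = (0.01 * data_range) ** 2   # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizes the contrast term
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy "visible" image
score = ssim_global(img, img)
print(score)  # identical images give SSIM = 1.0 (up to rounding)
```

A reported "SSIM up to 86%" corresponds to a score of about 0.86 on this 0-to-1 scale, with 1.0 meaning the synthetic and reference images are structurally identical.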
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.