Lighting the Night with Generative Artificial Intelligence
- URL: http://arxiv.org/abs/2506.22511v2
- Date: Fri, 11 Jul 2025 12:03:06 GMT
- Title: Lighting the Night with Generative Artificial Intelligence
- Authors: Tingting Zhou, Feng Zhang, Haoyang Fu, Baoxiang Pan, Renhe Zhang, Feng Lu, Zhixin Yang
- Abstract summary: Due to the lack of visible light at night, it is impossible to conduct continuous all-day weather observations using visible light reflectance data. We developed a high-precision visible light reflectance generative model, called RefDiff, which enables generation of visible light reflectance at the 0.47 μm, 0.65 μm, and 0.825 μm bands at night.
- Score: 12.565202991911411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The visible light reflectance data from geostationary satellites is crucial for meteorological observations and plays an important role in weather monitoring and forecasting. However, due to the lack of visible light at night, it is impossible to conduct continuous all-day weather observations using visible light reflectance data. This study pioneers the use of generative diffusion models to address this limitation. Based on the multi-band thermal infrared brightness temperature data from the Advanced Geostationary Radiation Imager (AGRI) onboard the Fengyun-4B (FY4B) geostationary satellite, we developed a high-precision visible light reflectance generative model, called Reflectance Diffusion (RefDiff), which enables generation of visible light reflectance at the 0.47 μm, 0.65 μm, and 0.825 μm bands at night. Compared to classical models, RefDiff not only significantly improves accuracy through ensemble averaging but also provides uncertainty estimation. Specifically, the SSIM index of RefDiff can reach 0.90, with particularly significant improvements in areas with complex cloud structures and thick clouds. The model's nighttime generation capability was validated using the VIIRS nighttime product, demonstrating performance comparable to its daytime counterpart. In summary, this research has made substantial progress in the ability to generate visible light reflectance at night, with the potential to expand the application of nighttime visible light data.
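The ensemble-averaging and uncertainty-estimation claim maps onto a simple pattern: draw several stochastic samples from the conditional generative model, take the per-pixel mean as the prediction, and use the per-pixel standard deviation as the uncertainty. Below is a minimal Python sketch of that pattern, not RefDiff's actual code; `sample_reflectance` is a hypothetical placeholder for one reverse-diffusion pass conditioned on AGRI thermal-infrared brightness temperatures.

```python
# Minimal sketch of ensemble averaging over stochastic generative samples.
# NOT RefDiff's implementation; `sample_reflectance` is a stand-in for one
# conditional reverse-diffusion pass.
import numpy as np

def sample_reflectance(ir_bt: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for one conditional diffusion sample.

    ir_bt: (C, H, W) thermal-infrared brightness temperatures.
    Returns a (3, H, W) reflectance field for the 0.47/0.65/0.825 um bands.
    """
    h, w = ir_bt.shape[1:]
    # Stand-in physics: noise around a crude normalization of one IR channel.
    base = (ir_bt[0] - ir_bt[0].min()) / (np.ptp(ir_bt[0]) + 1e-6)
    return np.clip(base[None] + 0.05 * rng.standard_normal((3, h, w)), 0.0, 1.0)

def ensemble_generate(ir_bt: np.ndarray, n_samples: int = 10, seed: int = 0):
    rng = np.random.default_rng(seed)
    samples = np.stack([sample_reflectance(ir_bt, rng) for _ in range(n_samples)])
    reflectance = samples.mean(axis=0)   # ensemble mean: the final prediction
    uncertainty = samples.std(axis=0)    # per-pixel spread: uncertainty estimate
    return reflectance, uncertainty
```

Averaging over samples suppresses sample-to-sample noise, while the spread flags pixels (e.g., complex cloud edges) where the generation is least constrained by the infrared inputs.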
Related papers
- The Devil is in the Darkness: Diffusion-Based Nighttime Dehazing Anchored in Brightness Perception [58.895000127068194]
We introduce the Diffusion-Based Nighttime Dehazing framework, which excels in both data synthesis and lighting reconstruction. We propose a restoration model that integrates a pre-trained diffusion model guided by a brightness perception network. Experiments validate our dataset's utility and the model's superior performance in joint haze removal and brightness mapping.
arXiv Detail & Related papers (2025-06-03T03:21:13Z)
- DiffSR: Learning Radar Reflectivity Synthesis via Diffusion Model from Satellite Observations [42.635670495018964]
We propose a two-stage diffusion-based method called DiffSR to generate high-frequency details and high-value areas.
Our method achieves state-of-the-art (SOTA) results, demonstrating the ability to generate high-frequency details and high-value areas.
arXiv Detail & Related papers (2024-11-11T04:50:34Z)
- A Semi-supervised Nighttime Dehazing Baseline with Spatial-Frequency Aware and Realistic Brightness Constraint [19.723367790947684]
We propose a semi-supervised model for real-world nighttime dehazing.
First, spatial attention and frequency spectrum filtering are implemented as a spatial-frequency domain information interaction module.
Second, a pseudo-label-based retraining strategy and a local window-based brightness loss are designed for the semi-supervised training process to suppress haze and glow; a toy sketch of the brightness-loss idea follows this entry.
arXiv Detail & Related papers (2024-03-27T13:27:02Z)
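The summary above names a local window-based brightness loss but not its form. The following is a hedged sketch under the assumption that it penalizes differences in window-averaged luminance between the dehazed output and a reference (e.g., a pseudo-label); it is illustrative, not the paper's implementation.

```python
# Hypothetical local window-based brightness loss (assumed form, not the
# paper's): compare window-averaged luminance of prediction and reference.
import torch
import torch.nn.functional as F

def local_brightness_loss(pred: torch.Tensor, ref: torch.Tensor,
                          window: int = 16) -> torch.Tensor:
    """pred, ref: (N, 3, H, W) images in [0, 1]."""
    pred_lum = pred.mean(dim=1, keepdim=True)   # crude luminance channel
    ref_lum = ref.mean(dim=1, keepdim=True)
    # Per-window mean brightness via non-overlapping average pooling.
    pred_local = F.avg_pool2d(pred_lum, window, stride=window)
    ref_local = F.avg_pool2d(ref_lum, window, stride=window)
    return F.l1_loss(pred_local, ref_local)
```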
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to a normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- Simulating Nighttime Visible Satellite Imagery of Tropical Cyclones Using Conditional Generative Adversarial Networks [10.76837828367292]
Visible (VIS) imagery is important for monitoring Tropical Cyclones (TCs) but is unavailable at night. This study presents a Conditional Generative Adversarial Network (CGAN) model to generate nighttime VIS imagery; a toy sketch of the conditioning pattern follows this entry.
arXiv Detail & Related papers (2024-01-22T03:44:35Z)
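As a rough illustration of the CGAN conditioning pattern described above (the paper's actual architecture is not given here), a pix2pix-style setup feeds infrared channels to the generator and lets the discriminator judge channel-wise concatenated (condition, image) pairs. All module names below are hypothetical.

```python
# Hypothetical pix2pix-style conditional GAN skeleton: generator maps IR
# channels to one VIS band; discriminator scores (condition, image) pairs.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, ir_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ir_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # VIS band in [0, 1]
        )

    def forward(self, ir: torch.Tensor) -> torch.Tensor:
        return self.net(ir)

class TinyDiscriminator(nn.Module):
    def __init__(self, ir_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ir_channels + 1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # Condition the discriminator by concatenating IR input with the image.
        return self.net(torch.cat([ir, vis], dim=1))
```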
- From Generation to Suppression: Towards Effective Irregular Glow Removal for Nighttime Visibility Enhancement [22.565044107631696]
Existing Low-Light Image Enhancement (LLIE) methods are primarily designed to improve brightness in dark regions, which suffer from severe degradation in nighttime images.
These methods give limited attention to another major source of visibility degradation: the glow effects in real night scenes.
We propose a new method for learning physical glow generation via multiple scattering estimation according to the Atmospheric Point Spread Function (APSF).
The proposed method is based on zero-shot learning and does not rely on any paired or unpaired training data. Empirical evaluations demonstrate the effectiveness of the proposed method in both glow suppression and low-light enhancement tasks.
arXiv Detail & Related papers (2023-07-31T15:51:15Z)
- Flare7K++: Mixing Synthetic and Real Datasets for Nighttime Flare Removal and Beyond [77.72043833102191]
We introduce Flare7K++, the first comprehensive nighttime flare removal dataset, consisting of 962 real-captured flare images (Flare-R) and 7,000 synthetic flares (Flare7K).
Compared to Flare7K, Flare7K++ is particularly effective in eliminating complicated degradation around the light source, which is intractable by using synthetic flares alone.
We additionally provide annotations of light sources in Flare7K++ and propose a new end-to-end pipeline that preserves the light source while removing lens flares.
arXiv Detail & Related papers (2023-06-07T08:27:44Z)
- Nighttime Smartphone Reflective Flare Removal Using Optical Center Symmetry Prior [81.64647648269889]
Reflective flare is a phenomenon that occurs when light reflects inside lenses, causing bright spots or a "ghosting effect" in photos.
We propose an optical center symmetry prior, which suggests that the reflective flare and light source are always symmetrical around the lens's optical center; a small numeric sketch follows this entry.
We create the first reflective flare removal dataset called BracketFlare, which contains diverse and realistic reflective flare patterns.
arXiv Detail & Related papers (2023-03-27T09:44:40Z)
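The prior above is directly computable: the reflective flare is expected at the point reflection of the light source through the optical center. A minimal sketch, with illustrative pixel coordinates (not the paper's code):

```python
# Point reflection of the light source through the optical center, per the
# optical-center symmetry prior. Coordinates are illustrative pixels.

def predicted_flare_position(light_xy: tuple[float, float],
                             center_xy: tuple[float, float]) -> tuple[float, float]:
    """Reflect the light-source position through the optical center."""
    lx, ly = light_xy
    cx, cy = center_xy
    return (2 * cx - lx, 2 * cy - ly)

# Example: a light source at (100, 40) with optical center (320, 240)
# predicts a reflective flare near (540, 440).
assert predicted_flare_position((100, 40), (320, 240)) == (540, 440)
```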
- Flare7K: A Phenomenological Nighttime Flare Removal Dataset [83.38205781536578]
We introduce Flare7K, the first nighttime flare removal dataset.
It offers 5,000 scattering and 2,000 reflective flare images, consisting of 25 types of scattering flares and 10 types of reflective flares.
With the paired data, we can train deep models to effectively restore flare-corrupted images taken in the real world; a toy compositing sketch follows this entry.
arXiv Detail & Related papers (2022-10-12T20:17:24Z)
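A hedged sketch of how a flare dataset like this one can yield paired supervision: additively composite a flare layer onto a clean nighttime image (a common additive image-formation assumption; the dataset's exact pipeline may differ), giving (corrupted, clean) pairs for supervised restoration.

```python
# Assumed additive compositing for paired training data; the dataset's
# actual synthesis pipeline may differ.
import numpy as np

def composite_flare(clean: np.ndarray, flare: np.ndarray,
                    gain: float = 1.0) -> np.ndarray:
    """clean, flare: (H, W, 3) float arrays in [0, 1]; returns corrupted image."""
    return np.clip(clean + gain * flare, 0.0, 1.0)

# (corrupted, clean) then serve as input/target for a restoration network.
```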
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR); a toy attenuation sketch follows this entry.
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
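The SNR reduction mentioned above follows from standard two-way Beer-Lambert attenuation: a return from range R through a scattering medium is damped by exp(-2·α·R), with α the extinction coefficient. A toy sketch under that standard assumption; the paper's full physics model is richer than this.

```python
# Two-way Beer-Lambert attenuation of lidar returns (standard physics,
# simplified relative to LISA's full model).
import numpy as np

def attenuate_returns(intensity: np.ndarray, ranges: np.ndarray,
                      alpha: float) -> np.ndarray:
    """intensity: received powers; ranges: per-point range in meters;
    alpha: extinction coefficient (1/m), larger in dense fog or heavy rain."""
    return intensity * np.exp(-2.0 * alpha * ranges)
```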
- Predicting Landsat Reflectance with Deep Generative Fusion [2.867517731896504]
Public satellite missions are commonly bound to a trade-off between spatial and temporal resolution.
This hinders their potential to assist vegetation monitoring or humanitarian actions.
We probe the potential of deep generative models to produce high-resolution optical imagery by fusing products with different spatial and temporal characteristics.
arXiv Detail & Related papers (2020-11-09T21:06:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.