An U-Net-Based Deep Neural Network for Cloud Shadow and Sun-Glint Correction of Unmanned Aerial System (UAS) Imagery
- URL: http://arxiv.org/abs/2509.08949v1
- Date: Wed, 10 Sep 2025 19:19:25 GMT
- Title: An U-Net-Based Deep Neural Network for Cloud Shadow and Sun-Glint Correction of Unmanned Aerial System (UAS) Imagery
- Authors: Yibin Wang, Wondimagegn Beshah, Padmanava Dash, Haifeng Wang
- Abstract summary: This study proposes a novel machine learning approach to first identify and extract regions with cloud shadows and sun glint. Pixel-level data was extracted from the images to train a U-Net-based deep learning model. A high-quality image correction model was determined and used to recover the cloud shadow and sun glint areas in the images.
- Score: 8.771946849115439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of unmanned aerial systems (UASs) has increased tremendously in the current decade. They have significantly advanced remote sensing with the capability to deploy and image the terrain at the spatial, spectral, temporal, and radiometric resolutions required for various remote sensing applications. One of the major advantages of UAS imagery is that images can be acquired in cloudy conditions by flying the UAS under the clouds. The limitation of the technology is that the imagery is often sullied by cloud shadows. Images taken over water are additionally affected by sun glint. These two artifacts pose serious issues for estimating water quality parameters from UAS images. This study proposes a novel machine learning approach to first identify and extract regions with cloud shadows and sun glint, separating them from clear-sky regions unaffected by either. Pixel-level data was extracted from the images to train a U-Net-based deep learning model, and the best settings for model training were identified based on various evaluation metrics from test cases. Using this evaluation, a high-quality image correction model was determined, which was used to recover the cloud shadow and sun glint areas in the images.
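The pipeline the abstract describes, extract the affected pixels and then recover them, can be sketched end to end. In this minimal sketch the trained U-Net is replaced by simple per-pixel brightness thresholds and the learned correction model by a clear-pixel mean; the function names and threshold values are illustrative, not from the paper.

```python
import numpy as np

def detect_affected_pixels(img, shadow_thresh=0.2, glint_thresh=0.9):
    """Return boolean masks for cloud-shadow (dark) and sun-glint (bright)
    pixels. In the paper these masks come from a trained U-Net; simple
    brightness thresholds stand in for it here."""
    brightness = img.mean(axis=-1)          # average over spectral bands
    shadow = brightness < shadow_thresh
    glint = brightness > glint_thresh
    return shadow, glint

def correct_image(img, mask):
    """Replace masked pixels, band by band, with the mean of the unaffected
    pixels -- a crude stand-in for the learned correction model."""
    out = img.copy()
    clear = ~mask
    for b in range(img.shape[-1]):
        out[..., b][mask] = img[..., b][clear].mean()
    return out
```

A real implementation would train the segmentation network on the pixel-level labels described in the abstract and learn the correction instead of averaging.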
Related papers
- CBEN -- A Multimodal Machine Learning Dataset for Cloud Robust Remote Sensing Image Understanding [4.405830705915443]
Cloudless analysis is often performed where cloudy images are excluded from machine learning datasets and methods. Cloud-robust methods can be achieved by combining optical data with radar, a modality unaffected by clouds. We show that state-of-the-art methods trained on combined clear-sky optical and radar imagery suffer performance drops of 23-33 percentage points when evaluated on cloudy images.
arXiv Detail & Related papers (2026-02-13T06:24:55Z) - SatFlow: Generative model based framework for producing High Resolution Gap Free Remote Sensing Imagery [0.0]
We present SatFlow, a generative model-based framework that fuses low-resolution MODIS imagery and Landsat observations to produce frequent, high-resolution, gap-free surface reflectance imagery. Our model, trained via Conditional Flow Matching, demonstrates better performance in generating imagery with preserved structural and spectral integrity. This capability is crucial for downstream applications such as crop phenology tracking and environmental change detection.
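The training objective named above, Conditional Flow Matching, regresses a velocity field along paths between a source and a target sample; with the common straight-path choice, the target velocity is constant. A minimal sketch of the per-sample quantities (the array shapes and function names are illustrative, not SatFlow's actual architecture):

```python
import numpy as np

def cfm_training_pair(x0, x1, t):
    """Conditional flow matching with straight paths: the interpolant
    x_t = (1 - t) * x0 + t * x1 moves with constant velocity x1 - x0,
    which is the regression target for the learned vector field."""
    x_t = (1.0 - t) * x0 + t * x1
    target_velocity = x1 - x0
    return x_t, target_velocity

def cfm_loss(predicted_velocity, target_velocity):
    """Mean-squared error between the model's predicted velocity and the
    straight-path target."""
    return float(np.mean((predicted_velocity - target_velocity) ** 2))
```

In a fusion setting like the one described, x0 would be drawn from the coarse (e.g. upsampled MODIS) input and x1 from the high-resolution target, with the network conditioned on the available observations.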
arXiv Detail & Related papers (2025-02-03T06:40:13Z) - Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z) - Removing cloud shadows from ground-based solar imagery [0.33748750222488655]
We propose a new method to remove cloud shadows, based on a U-Net architecture, and compare classical supervision with conditional GAN.
We evaluate our method on two different imaging modalities, using both real images and a new dataset of synthetic clouds.
arXiv Detail & Related papers (2024-07-18T10:38:24Z) - Few-shot point cloud reconstruction and denoising via learned Guassian splats renderings and fine-tuned diffusion features [52.62053703535824]
We propose a method to reconstruct point clouds from few images and to denoise point clouds from their rendering.
To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with hybrid surface and appearance terms.
We demonstrate how these learned filters can be used to remove point cloud noise without 3D supervision.
arXiv Detail & Related papers (2024-04-01T13:38:16Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degradation of image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z) - Boosting Point Clouds Rendering via Radiance Mapping [49.24193509772339]
We focus on boosting the image quality of point clouds rendering with a compact model design.
We simplify the NeRF representation to a spatial mapping function which only requires single evaluation per pixel.
Our method achieves the state-of-the-art rendering on point clouds, outperforming prior works by notable margins.
arXiv Detail & Related papers (2022-10-27T01:25:57Z) - Seeing Through Clouds in Satellite Images [14.84582204034532]
This paper presents a neural-network-based solution to recover pixels occluded by clouds in satellite images.
We leverage radio frequency (RF) signals in the ultra/super-high frequency band that penetrate clouds to help reconstruct the occluded regions in multispectral images.
arXiv Detail & Related papers (2021-06-15T20:01:27Z) - Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning [52.07698484363237]
We propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
Our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks.
arXiv Detail & Related papers (2021-04-05T13:04:44Z) - Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation [0.618778092044887]
Cloud removal has received much attention due to the wide range of satellite image applications.
In this study, we attempt to solve the problem using two generative adversarial networks (GANs).
The first translates SAR images into optical images, and the second removes clouds using the translated images of the prior GAN.
arXiv Detail & Related papers (2020-12-22T17:19:14Z) - Cloud and Cloud Shadow Segmentation for Remote Sensing Imagery via Filtered Jaccard Loss Function and Parametric Augmentation [8.37609145576126]
Current methods for cloud/shadow identification in geospatial imagery are not as accurate as they should be, especially in the presence of snow and haze.
This paper presents a deep learning-based framework for the detection of cloud/shadow in Landsat 8 images.
arXiv Detail & Related papers (2020-01-23T19:13:00Z)
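The Jaccard (IoU) loss named in the last entry is straightforward to express for binary segmentation. A minimal soft-Jaccard sketch over probability masks follows; the paper's "filtered" variant adds a correction term not reproduced here.

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Soft Jaccard (IoU) loss for binary segmentation.
    pred: predicted probabilities in [0, 1]; target: {0, 1} ground truth.
    The loss is 1 - IoU, so a perfect prediction scores 0 and a fully
    disjoint one scores (almost) 1. eps guards against empty masks."""
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    return 1.0 - (intersection + eps) / (union + eps)
```

Unlike per-pixel cross-entropy, this loss directly optimizes region overlap, which is why it is popular for sparse targets such as cloud and shadow masks.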
This list is automatically generated from the titles and abstracts of the papers in this site.