Detecting Blurred Ground-based Sky/Cloud Images
- URL: http://arxiv.org/abs/2110.09764v1
- Date: Tue, 19 Oct 2021 06:51:31 GMT
- Title: Detecting Blurred Ground-based Sky/Cloud Images
- Authors: Mayank Jain, Navya Jain, Yee Hui Lee, Stefan Winkler, and Soumyabrata Dev
- Abstract summary: We propose an efficient framework that can identify blurred sky/cloud images.
Using a static external marker, our proposed methodology achieves a detection accuracy of 94%.
- Score: 5.178207785968322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ground-based whole sky imagers (WSIs) are used by researchers in
various fields to study atmospheric events. These ground-based sky cameras
capture visible-light images of the sky at regular intervals of time. Owing to
the atmospheric interference and camera sensor noise, the captured images often
exhibit noise and blur. This may pose a problem in subsequent image processing
stages. Therefore, it is important to accurately identify the blurred images.
This is a difficult task, as clouds have varying shapes, textures, and soft
edges whereas the sky acts as a homogeneous and uniform background. In this
paper, we propose an efficient framework that can identify blurred
sky/cloud images. Using a static external marker, our proposed methodology
achieves a detection accuracy of 94%. To the best of our knowledge, our approach is the
first of its kind in the automatic identification of blurred images for
ground-based sky/cloud images.
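The abstract does not spell out how the static external marker is used, so the following is only a rough illustration of the general idea: score the sharpness of a fixed marker region with a variance-of-Laplacian measure and threshold it. The marker coordinates, the Laplacian heuristic, and the threshold value are all assumptions for the sketch, not the authors' actual method.

```python
import numpy as np

def laplacian_variance(patch: np.ndarray) -> float:
    """Variance of the Laplacian response: a common sharpness score.
    Low values indicate a blurred patch."""
    # 3x3 Laplacian kernel applied as a valid-mode convolution
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = patch.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * patch[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def is_blurred(image: np.ndarray, marker_box, threshold: float = 100.0) -> bool:
    """Crop the static marker region (y0, y1, x0, x1) and compare its
    sharpness against a calibrated threshold (both values hypothetical)."""
    y0, y1, x0, x1 = marker_box
    return laplacian_variance(image[y0:y1, x0:x1]) < threshold
```

Because the marker's appearance is fixed, its sharpness score gives a reference that sidesteps the hard part noted in the abstract: clouds have soft, varying textures, so scoring the sky itself is unreliable.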
Related papers
- Precise Forecasting of Sky Images Using Spatial Warping [12.042758147684822]
We introduce a deep learning method to predict a future sky image frame with higher resolution than previous methods.
Our main contribution is to derive an optimal warping method to counter the adverse effects of clouds at the horizon.
arXiv Detail & Related papers (2024-09-18T17:25:42Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images [73.23687409870656]
We propose a cycle generative adversarial network to construct a novel end-to-end image dehaze model.
We adopt outdoor image datasets to train our model, comprising a real-world unpaired image dataset and a paired image dataset.
Based on the cycle structure, our model combines four kinds of loss functions to constrain the result: adversarial loss, cycle consistency loss, photorealism loss, and paired L1 loss.
arXiv Detail & Related papers (2022-10-14T07:45:33Z)
- Structure Representation Network and Uncertainty Feedback Learning for Dense Non-Uniform Fog Removal [64.77435210892041]
We introduce a structure-representation network with uncertainty feedback learning.
Specifically, we extract the feature representations from a pre-trained Vision Transformer (DINO-ViT) module to recover the background information.
To handle the intractability of estimating the atmospheric light colors, we exploit the grayscale version of our input image.
arXiv Detail & Related papers (2022-10-06T17:10:57Z)
- Learning to restore images degraded by atmospheric turbulence using uncertainty [93.72048616001064]
Atmospheric turbulence can significantly degrade the quality of images acquired by long-range imaging systems.
We propose a deep learning-based approach for restoring a single image degraded by atmospheric turbulence.
arXiv Detail & Related papers (2022-07-07T17:24:52Z)
- Deep Learning Eliminates Massive Dust Storms from Images of Tianwen-1 [24.25089331365282]
We propose an approach that reuses the image dehazing knowledge obtained on Earth to resolve the dust-removal problem on Mars.
Inspired by the haze formation process on Earth, we formulate a similar visual degradation process on clean images.
We train a deep model that inherently encodes dust irrelevant features and decodes them into dust-free images.
arXiv Detail & Related papers (2022-06-21T07:05:09Z)
- Both Style and Fog Matter: Cumulative Domain Adaptation for Semantic Foggy Scene Understanding [63.99301797430936]
We propose a new pipeline to cumulatively adapt style, fog, and the dual factor (style and fog).
Specifically, we devise a unified framework that first disentangles the style factor and the fog factor separately, and then the dual factor, from images in different domains.
Our method achieves the state-of-the-art performance on three benchmarks and shows generalization ability in rainy and snowy scenes.
arXiv Detail & Related papers (2021-12-01T13:21:20Z)
- Robust Glare Detection: Review, Analysis, and Dataset Release [6.281101654856357]
Sun glare is widespread in images captured by unmanned ground and aerial vehicles operating in outdoor environments.
However, the source of glare is not limited to the sun; glare can also appear in images captured at night and in indoor environments.
This research aims to introduce the first dataset for glare detection, which includes images captured by different cameras.
arXiv Detail & Related papers (2021-10-12T13:46:33Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
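The pipeline above (patch-wise blurriness levels assembled into a per-pixel map) can be sketched roughly as follows. This is not the paper's method: the CNN classifier and guided-filter refinement are replaced by a simple gradient-energy heuristic, and the patch size and level count here are assumptions for illustration.

```python
import numpy as np

def defocus_map(gray: np.ndarray, patch: int = 16, levels: int = 20) -> np.ndarray:
    """Assign each non-overlapping patch one of `levels` blurriness bins,
    using local gradient energy as a stand-in for a learned classifier.
    Level 0 = sharpest patch, level `levels - 1` = most blurred."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    energy = gx ** 2 + gy ** 2  # higher gradient energy -> sharper
    rows, cols = h // patch, w // patch
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            scores[r, c] = energy[r * patch:(r + 1) * patch,
                                  c * patch:(c + 1) * patch].mean()
    # Normalize, invert (low energy = blurred), and quantize into levels
    norm = (scores - scores.min()) / (np.ptp(scores) + 1e-9)
    return (levels - 1 - np.round(norm * (levels - 1))).astype(int)
```

A real system would refine this coarse patch map back to full resolution (the paper uses an iterative weighted guided filter for that step) to obtain a smooth per-pixel defocus map.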
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Adherent Mist and Raindrop Removal from a Single Image Using Attentive Convolutional Network [1.2891210250935146]
Temperature-difference-induced mist adhering to glass, such as a windshield or camera lens, is often inhomogeneous and obscuring.
In this work, we present a new problem of image degradation caused by adherent mist and raindrops.
An attentive convolutional network is adopted to visually remove the adherent mist and raindrop from a single image.
arXiv Detail & Related papers (2020-09-03T06:17:53Z)
- Sky Optimization: Semantically aware image processing of skies in low-light photography [26.37385679374474]
We propose an automated method, which can run as a part of a camera pipeline, for creating accurate sky alpha-masks.
Our method performs end-to-end sky optimization in less than half a second per image on a mobile device.
arXiv Detail & Related papers (2020-06-15T20:19:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.