Deep learning-based identification of precipitation clouds from all-sky camera data for observatory safety
- URL: http://arxiv.org/abs/2503.18670v1
- Date: Mon, 24 Mar 2025 13:40:51 GMT
- Title: Deep learning-based identification of precipitation clouds from all-sky camera data for observatory safety
- Authors: Mohammad H. Zhoolideh Haghighi, Alireza Ghasrimanesh, Habib Khosroshahi
- Abstract summary: We apply a deep-learning approach for automating the identification of precipitation clouds in all-sky camera data as a cloud warning system. We construct our original training and test sets using the all-sky camera image archive of the Iranian National Observatory. Our trained model can be deployed for real-time analysis, enabling the rapid identification of potential threats.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Wide-angle all-sky cameras are used in most astronomical observatories to monitor night-sky conditions, in particular sky cloudiness. In this manuscript, we apply a deep-learning approach for automating the identification of precipitation clouds in all-sky camera data as a cloud warning system. We construct our original training and test sets using the all-sky camera image archive of the Iranian National Observatory (INO). The training and test set images are labeled manually based on their potential rainfall and their distribution in the sky. We train our model on a set of roughly 2445 images taken by the INO all-sky camera through the deep learning method based on the EfficientNet network. Our model reaches an average accuracy of 99% in determining a cloud's rainfall potential and an accuracy of 96% for cloud coverage. To enable a comprehensive comparison and evaluate the performance of alternative architectures for the task, we additionally trained three models: LeNet, DeiT, and AlexNet. This approach can be used for early warning of incoming dangerous clouds toward telescopes, and it harnesses the power of deep learning to automatically analyze vast amounts of all-sky camera data and accurately identify precipitation cloud formations. Our trained model can be deployed for real-time analysis, enabling the rapid identification of potential threats and offering a scalable solution that can improve our ability to safeguard telescopes and instruments in observatories. This is important now that numerous small and medium-sized telescopes are increasingly integrated with smart control systems to reduce manual operation.
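The paper's 96% cloud-coverage accuracy comes from a trained EfficientNet classifier; as a non-learning point of reference, cloud coverage in an all-sky frame can be roughly estimated by thresholding pixel brightness inside the camera's circular field of view. The brightness threshold and synthetic frame below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def cloud_coverage(gray: np.ndarray, threshold: float = 0.6) -> float:
    """Estimate the cloudy fraction of a circular all-sky frame.

    Pixels brighter than `threshold` (clouds scatter light and appear
    bright against the night sky) are counted as cloudy; pixels outside
    the inscribed circular field of view are ignored.
    """
    h, w = gray.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx, r = (h - 1) / 2, (w - 1) / 2, min(h, w) / 2
    fov = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2  # circular sky mask
    return float((gray[fov] > threshold).mean())

# Synthetic 100x100 frame: dark sky with one bright "cloud" patch.
frame = np.zeros((100, 100))
frame[20:50, 20:50] = 0.9
print(round(cloud_coverage(frame), 3))
```

A deep model replaces the fixed threshold with features learned from labeled frames, which is what lets the paper's system also judge rainfall potential rather than coverage alone.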
Related papers
- UCloudNet: A Residual U-Net with Deep Supervision for Cloud Image Segmentation [10.797462947568954]
We introduce a residual U-Net with deep supervision for cloud segmentation. It provides better accuracy than previous approaches, at a lower training cost.
arXiv Detail & Related papers (2025-01-11T05:15:24Z)
- 3D Cloud reconstruction through geospatially-aware Masked Autoencoders [1.4124182346539256]
This study leverages geostationary imagery from MSG/SEVIRI and radar reflectivity measurements of cloud profiles from CloudSat/CPR to reconstruct 3D cloud structures. We first apply self-supervised learning (SSL) methods, Masked Autoencoders (MAE) and geospatially-aware SatMAE, on unlabelled MSG images, and then fine-tune our models on matched image-profile pairs.
arXiv Detail & Related papers (2025-01-03T12:26:04Z)
- DNN-based 3D Cloud Retrieval for Variable Solar Illumination and Multiview Spaceborne Imaging [2.6968321526169508]
We introduce the first scalable deep neural network-based system for 3D cloud retrieval.
By integrating multiview cloud intensity images with camera poses and solar direction data, we achieve greater flexibility in recovery.
arXiv Detail & Related papers (2024-11-07T13:13:23Z)
- Few-shot point cloud reconstruction and denoising via learned Gaussian splat renderings and fine-tuned diffusion features [52.62053703535824]
We propose a method to reconstruct point clouds from few images and to denoise point clouds from their rendering.
To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with hybrid surface and appearance representations.
We demonstrate how these learned filters can be used to remove point cloud noise without 3D supervision.
arXiv Detail & Related papers (2024-04-01T13:38:16Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos [75.9251839023226]
We propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method to capture the structure of point cloud videos without human annotations.
MaST-Pre consists of two self-supervised learning tasks. First, by reconstructing masked point tubes, our method is able to capture appearance information of point cloud videos.
Second, to learn motion, we propose a temporal cardinality difference prediction task that estimates the change in the number of points within a point tube.
arXiv Detail & Related papers (2023-08-18T02:12:54Z)
- UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series [19.32220113046804]
We introduce UnCRtainTS, a method for multi-temporal cloud removal built on a novel attention-based architecture.
We show how the well-calibrated predicted uncertainties enable a precise control of the reconstruction quality.
arXiv Detail & Related papers (2023-04-11T19:27:18Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraints of large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
- Cloud detection machine learning algorithms for PROBA-V [6.950862982117125]
The objective of the algorithms presented in this paper is to detect clouds accurately, providing a per-pixel cloud flag.
The effectiveness of the proposed method is successfully illustrated using a large number of real Proba-V images.
arXiv Detail & Related papers (2020-12-09T18:23:59Z)
- Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z)
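The Neural Ray Surfaces entry above replaces a fixed camera model with per-pixel projection rays predicted by a network; the classical special case it generalizes is the pinhole camera, whose per-pixel ray directions follow directly from the intrinsics. The sketch below, using an assumed toy intrinsic matrix, computes that analytic pinhole ray grid; NRS would instead learn such a ray field from raw videos:

```python
import numpy as np

def pinhole_rays(h: int, w: int, fx: float, fy: float,
                 cx: float, cy: float) -> np.ndarray:
    """Unit viewing-ray direction for every pixel of a pinhole camera.

    Returns an (h, w, 3) array of unit vectors in the camera frame;
    NRS generalizes this fixed analytic ray field to one predicted
    per pixel by a convolutional network.
    """
    v, u = np.mgrid[:h, :w]
    x = (u - cx) / fx  # back-project pixel columns
    y = (v - cy) / fy  # back-project pixel rows
    z = np.ones_like(x, dtype=float)
    rays = np.stack([x, y, z], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

# Toy 4x4 image with the principal point at its centre (assumed intrinsics).
rays = pinhole_rays(4, 4, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

Because the ray field is learned rather than derived from intrinsics, the same self-supervised depth and odometry objectives can be applied to fisheye, catadioptric, and other non-pinhole cameras.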
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.