Cloud Optical Thickness Retrievals Using Angle Invariant Attention Based Deep Learning Models
- URL: http://arxiv.org/abs/2505.24638v1
- Date: Fri, 30 May 2025 14:26:30 GMT
- Title: Cloud Optical Thickness Retrievals Using Angle Invariant Attention Based Deep Learning Models
- Authors: Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang, Sanjay Purushotham
- Abstract summary: Cloud Optical Thickness (COT) is a critical cloud property influencing Earth's climate, weather, and radiation budget. We propose a novel angle-invariant, attention-based deep model called Cloud-Attention-Net with Angle Coding (CAAC). CAAC significantly outperforms existing state-of-the-art deep learning models, reducing cloud property retrieval errors by at least a factor of nine.
- Score: 7.86932319873743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloud Optical Thickness (COT) is a critical cloud property influencing Earth's climate, weather, and radiation budget. Satellite radiance measurements enable global COT retrieval, but challenges like 3D cloud effects, viewing angles, and atmospheric interference must be addressed to ensure accurate estimation. Traditionally, the Independent Pixel Approximation (IPA) method, which treats individual pixels independently, has been used for COT estimation. However, IPA introduces significant bias due to its simplified assumptions. Recently, deep learning-based models have shown improved performance over IPA but lack robustness, as they are sensitive to variations in radiance intensity, distortions, and cloud shadows. These models also introduce substantial errors in COT estimation under different solar and viewing zenith angles. To address these challenges, we propose a novel angle-invariant, attention-based deep model called Cloud-Attention-Net with Angle Coding (CAAC). Our model leverages attention mechanisms and angle embeddings to account for satellite viewing geometry and 3D radiative transfer effects, enabling more accurate retrieval of COT. Additionally, our multi-angle training strategy ensures angle invariance. Through comprehensive experiments, we demonstrate that CAAC significantly outperforms existing state-of-the-art deep learning models, reducing cloud property retrieval errors by at least a factor of nine.
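The abstract's angle-coding idea can be sketched minimally: encode the solar and viewing zenith angles as sin/cos features, append them to every pixel's radiance features, and let a self-attention layer condition the retrieval on viewing geometry. This is a toy illustration only, not the authors' actual CAAC architecture; the function names, feature shapes, and single-head attention are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def angle_coding(sza: float, vza: float) -> np.ndarray:
    """Sin/cos encoding of solar and viewing zenith angles (radians);
    identical geometry always maps to the same code."""
    return np.array([np.sin(sza), np.cos(sza), np.sin(vza), np.cos(vza)])

def self_attention(x: np.ndarray) -> np.ndarray:
    """Plain scaled dot-product self-attention over pixel tokens."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # rows sum to 1
    return w @ x

def caac_sketch(radiances: np.ndarray, sza: float, vza: float) -> np.ndarray:
    """Append the angle code to each pixel's radiance features so the
    attention layer can mix information conditioned on geometry."""
    n = radiances.shape[0]
    ang = np.tile(angle_coding(sza, vza), (n, 1))
    return self_attention(np.concatenate([radiances, ang], axis=1))

pixels = rng.normal(size=(16, 8))            # 16 pixels, 8 radiance channels
out = caac_sketch(pixels, sza=0.5, vza=0.3)  # (16, 12) geometry-aware features
```

A multi-angle training strategy, as described in the abstract, would then present the same scene under several (sza, vza) pairs so the learned mapping becomes insensitive to the viewing geometry.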
Related papers
- Towards Explicit Geometry-Reflectance Collaboration for Generalized LiDAR Segmentation in Adverse Weather [58.4718010073085]
Existing LiDAR segmentation models often suffer from decreased accuracy when exposed to adverse weather conditions. Recent methods addressing this issue focus on enhancing training data through weather simulation or universal augmentation techniques. We propose a novel Geometry-Reflectance Collaboration framework that explicitly separates feature extraction for geometry and reflectance.
arXiv Detail & Related papers (2025-06-03T03:23:43Z) - Joint Retrieval of Cloud properties using Attention-based Deep Learning Models [7.86932319873743]
We introduce CloudUNet with Attention Module (CAM), a compact UNet-based model that employs attention mechanisms to reduce errors in thick, overlapping cloud regions. Our CAM model outperforms state-of-the-art deep learning methods, reducing mean absolute errors (MAE) by 34% for COT and 42% for CER, and achieving 76% and 86% lower MAE for COT and CER retrievals compared to the IPA method.
arXiv Detail & Related papers (2025-04-04T03:01:19Z) - RSAR: Restricted State Angle Resolver and Rotated SAR Benchmark [61.987291551925516]
We introduce the Unit Cycle Resolver, which incorporates a unit circle constraint loss to improve angle prediction accuracy. Our approach can effectively improve the performance of existing state-of-the-art weakly supervised methods. With the aid of UCR, we further annotate and introduce RSAR, the largest multi-class rotated SAR object detection dataset to date.
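One plausible reading of the "unit circle constraint loss" mentioned above: when an angle is regressed as a (sin θ, cos θ) pair, penalize predictions that drift off the unit circle, since valid pairs must satisfy sin²θ + cos²θ = 1. The exact loss in the paper may differ; this is an illustrative sketch.

```python
import numpy as np

def unit_circle_loss(sin_pred, cos_pred) -> float:
    """Mean squared deviation of predicted (sin, cos) pairs from the
    unit-circle identity sin^2 + cos^2 = 1. Zero iff every pair lies
    exactly on the circle."""
    sin_pred, cos_pred = np.asarray(sin_pred), np.asarray(cos_pred)
    return float(np.mean((sin_pred**2 + cos_pred**2 - 1.0) ** 2))
```

Exact sin/cos pairs of any true angle incur zero penalty, while an off-circle prediction such as (1, 1) is penalized, which regularizes angle regression without constraining θ itself.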
arXiv Detail & Related papers (2025-01-08T11:41:47Z) - MODEL&CO: Exoplanet detection in angular differential imaging by learning across multiple observations [37.845442465099396]
Most post-processing methods build a model of the nuisances from the target observations themselves.
We propose to build the nuisance model from an archive of multiple observations by leveraging supervised deep learning techniques.
We apply the proposed algorithm to several datasets from the VLT/SPHERE instrument, and demonstrate a superior precision-recall trade-off.
arXiv Detail & Related papers (2024-09-23T09:22:45Z) - SphereDiffusion: Spherical Geometry-Aware Distortion Resilient Diffusion Model [63.685132323224124]
Controllable spherical panoramic image generation holds substantial applicative potential across a variety of domains.
In this paper, we introduce a novel framework of SphereDiffusion to address these unique challenges.
Experiments on Structured3D dataset show that SphereDiffusion significantly improves the quality of controllable spherical image generation and relatively reduces around 35% FID on average.
arXiv Detail & Related papers (2024-03-15T06:26:46Z) - Learned 3D volumetric recovery of clouds and its uncertainty for climate analysis [16.260663741590253]
Uncertainty in climate prediction and cloud physics is tied to observational gaps relating to shallow scattered clouds.
We design a learning-based model (ProbCT) to achieve CT of such clouds, based on noisy multi-view spaceborne images.
We demonstrate the approach in simulations and on real-world data, and indicate the relevance of 3D recovery and uncertainty to precipitation and renewable energy.
arXiv Detail & Related papers (2024-03-09T14:57:03Z) - Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud
Analysis [74.00441177577295]
Point cloud analysis faces computational system overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - Cloud removal Using Atmosphere Model [7.259230333873744]
Cloud removal is an essential task in remote sensing data analysis.
We propose to use a scattering model for temporal sequences of images of any scene in the framework of low-rank and sparse models.
We develop a semi-realistic simulation method to produce cloud cover so that various methods can be quantitatively analysed.
arXiv Detail & Related papers (2022-10-05T01:29:19Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Benchmarking of Deep Learning Irradiance Forecasting Models from Sky Images -- an in-depth Analysis [0.0]
We train four commonly used Deep Learning architectures to forecast solar irradiance from sequences of hemispherical sky images.
Results show that encoding temporal aspects greatly improved the predictions, with the 10 min Forecast Skill reaching 20.4% on the test year.
We conclude that, with a common setup, Deep Learning models tend to behave just as a 'very smart persistence model', temporally aligned with the persistence model while mitigating its most penalising errors.
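The Forecast Skill quoted above is conventionally defined relative to a persistence baseline: FS = 1 - RMSE_model / RMSE_persistence, so positive values mean the model beats persistence and 0 means it merely matches it. The paper's exact variant (e.g. its choice of "smart persistence" baseline) may differ; this is the standard form.

```python
import numpy as np

def forecast_skill(y_true, y_model, y_persistence) -> float:
    """Skill of a forecast relative to the persistence baseline:
    FS = 1 - RMSE(model) / RMSE(persistence)."""
    rmse = lambda a, b: float(np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)))
    return 1.0 - rmse(y_true, y_model) / rmse(y_true, y_persistence)
```

A perfect forecast scores 1.0, a forecast no better than persistence scores 0.0, and a worse one goes negative, which is what makes the metric a fair comparison across sites and seasons.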
arXiv Detail & Related papers (2021-02-01T09:31:14Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.