DNN-based 3D Cloud Retrieval for Variable Solar Illumination and Multiview Spaceborne Imaging
- URL: http://arxiv.org/abs/2411.04682v1
- Date: Thu, 07 Nov 2024 13:13:23 GMT
- Title: DNN-based 3D Cloud Retrieval for Variable Solar Illumination and Multiview Spaceborne Imaging
- Authors: Tamar Klein, Tom Aizenberg, Roi Ronen
- Abstract summary: We introduce the first scalable deep neural network-based system for 3D cloud retrieval.
By integrating multiview cloud intensity images with camera poses and solar direction data, we achieve greater flexibility in recovery.
- Score: 2.6968321526169508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Climate studies often rely on remotely sensed images to retrieve two-dimensional maps of cloud properties. To advance volumetric analysis, we focus on recovering the three-dimensional (3D) heterogeneous extinction coefficient field of shallow clouds using multiview remote sensing data. Climate research requires large-scale worldwide statistics. To enable scalable data processing, previous deep neural networks (DNNs) can infer at spaceborne remote sensing downlink rates. However, prior methods are limited to a fixed solar illumination direction. In this work, we introduce the first scalable DNN-based system for 3D cloud retrieval that accommodates varying camera poses and solar directions. By integrating multiview cloud intensity images with camera poses and solar direction data, we achieve greater flexibility in recovery. Training of the DNN is performed by a novel two-stage scheme to address the high number of degrees of freedom in this problem. Our approach shows substantial improvements over previous state-of-the-art, particularly in handling variations in the sun's zenith angle.
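As a concrete illustration of the conditioning the abstract describes, below is a minimal PyTorch sketch of a network that maps multiview intensity images plus per-view camera poses and a solar direction to a voxel grid of extinction coefficients. The architecture, layer sizes, and the 9-dimensional geometry encoding are illustrative assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class CloudRetrievalNet(nn.Module):
    """Hypothetical sketch: V intensity views + camera poses + sun direction
    -> 3D extinction-coefficient grid. Not the paper's actual architecture."""
    def __init__(self, n_views=10, grid=16):
        super().__init__()
        self.grid = grid
        # Shared CNN applied to each monochrome intensity view.
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Per-view geometry: 6-D camera pose + 3-D sun direction (assumed encoding).
        self.geo_enc = nn.Sequential(nn.Linear(9, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(n_views * 64, 256), nn.ReLU(),
            nn.Linear(256, grid ** 3), nn.Softplus())  # extinction is non-negative

    def forward(self, imgs, poses, sun_dir):
        # imgs: (B, V, 1, H, W); poses: (B, V, 6); sun_dir: (B, 3)
        B, V = imgs.shape[:2]
        f_img = self.img_enc(imgs.flatten(0, 1)).view(B, V, -1)
        geo = torch.cat([poses, sun_dir[:, None].expand(-1, V, -1)], dim=-1)
        feat = torch.cat([f_img, self.geo_enc(geo)], dim=-1).flatten(1)
        return self.head(feat).view(B, self.grid, self.grid, self.grid)

net = CloudRetrievalNet()
beta = net(torch.rand(2, 10, 1, 32, 32), torch.rand(2, 10, 6), torch.rand(2, 3))
print(beta.shape)  # torch.Size([2, 16, 16, 16])
```

The paper's two-stage training scheme is not reproduced here; a plausible analogue would pre-train on a restricted set of solar angles and then fine-tune across the full range of viewing and illumination geometries.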
Related papers
- AIM2PC: Aerial Image to 3D Building Point Cloud Reconstruction [2.9998889086656586]
Recent methods primarily focus on rooftops from aerial images, often overlooking essential geometrical details.
There is a notable lack of datasets containing complete 3D point clouds for entire buildings, along with challenges in obtaining reliable camera pose information for aerial images.
This paper presents a novel methodology, AIM2PC, which utilizes our generated dataset that includes complete 3D point clouds and determined camera poses.
arXiv Detail & Related papers (2025-03-24T10:34:07Z)
- 3D Cloud reconstruction through geospatially-aware Masked Autoencoders [1.4124182346539256]
This study leverages geostationary imagery from MSG/SEVIRI and radar reflectivity measurements of cloud profiles from CloudSat/CPR to reconstruct 3D cloud structures.
We first apply self-supervised learning (SSL) methods-Masked Autoencoders (MAE) and geospatially-aware SatMAE on unlabelled MSG images, and then fine-tune our models on matched image-profile pairs.
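The pretrain-then-fine-tune recipe can be sketched as follows. This is a deliberately simplified illustration (the masked tokens are replaced by a learned token before encoding, whereas a true MAE encodes only the visible patches), and the patch size and 25-bin profile head are assumptions, not the paper's SatMAE configuration.

```python
import torch
import torch.nn as nn

class TinyMaskedAE(nn.Module):
    """Simplified masked-reconstruction model for illustration only."""
    def __init__(self, patch_dim=64, dim=64):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pixel_head = nn.Linear(dim, patch_dim)  # stage 1: reconstruction
        self.profile_head = nn.Linear(dim, 25)       # stage 2: assumed 25 CPR bins

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim); mask: (B, N) bool, True = hidden patch
        tok = self.embed(patches)
        tok = torch.where(mask[..., None], self.mask_token.expand_as(tok), tok)
        return self.encoder(tok)

model = TinyMaskedAE()
patches = torch.rand(2, 16, 64)
mask = torch.rand(2, 16) > 0.25  # hide ~75% of patches
z = model(patches, mask)
# Stage 1 (SSL): reconstruct the hidden patches of unlabelled MSG imagery.
ssl_loss = (model.pixel_head(z) - patches)[mask].pow(2).mean()
# Stage 2: fine-tune on matched image/CloudSat-profile pairs.
profile = model.profile_head(z.mean(1))  # (B, 25) predicted cloud profile
```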
arXiv Detail & Related papers (2025-01-03T12:26:04Z)
- Deep Learning for 3D Point Cloud Enhancement: A Survey [7.482216242644069]
This paper presents a comprehensive survey for deep-learning-based point cloud enhancement methods.
It covers three main perspectives for point cloud enhancement, i.e., denoising to achieve clean data, completion to recover unseen data, and upsampling to obtain dense data.
Our survey presents a new taxonomy for recent state-of-the-art methods and systematic experimental results on standard benchmarks.
arXiv Detail & Related papers (2024-10-30T15:07:06Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
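The core of image-to-point distillation is aligning student point features with frozen teacher image features at corresponding locations. A generic version of that step is sketched below; HVDistill's hybrid-view pairing is more involved than this assumed cosine-similarity form.

```python
import torch
import torch.nn.functional as F

def distill_loss(point_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
    # point_feats: (N, C) features from the student point-cloud network.
    # image_feats: (N, C) teacher image features at the pixels the points project to.
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(image_feats.detach(), dim=-1)  # teacher is frozen
    return 1.0 - (p * t).sum(-1).mean()            # 1 - mean cosine similarity
```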
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
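Differentiable rendering of this kind reduces, per camera ray, to alpha compositing of densities and colors predicted from the point-cloud encoder. A toy compositing step is shown below; it illustrates the mechanism, not Ponder's actual pipeline.

```python
import torch

def composite(density: torch.Tensor, color: torch.Tensor, dt: float = 1.0) -> torch.Tensor:
    # density: (n_samples,) and color: (n_samples, 3) along one camera ray.
    alpha = 1.0 - torch.exp(-density * dt)                      # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
    weights = alpha * trans                                     # contribution per sample
    return (weights[:, None] * color).sum(0)                    # rendered pixel, (3,)

# A photometric loss between rendered and real pixels back-propagates through
# this compositing into the encoder that produced `density` and `color`.
```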
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds [100.03877236181546]
PolarMix is a point cloud augmentation technique that is simple and generic.
It works as a plug-and-play module for various 3D deep architectures and also performs well for unsupervised domain adaptation.
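The scene-level half of the idea (swapping an azimuth sector between two scans) is easy to sketch; the instance-level rotate-paste branch is omitted here, and per-point labels would be concatenated the same way.

```python
import numpy as np

def polarmix_swap(scan_a: np.ndarray, scan_b: np.ndarray,
                  az0: float, az1: float) -> np.ndarray:
    # scan_*: (N, 4) arrays of x, y, z, intensity; az0 < az1 in radians.
    def in_sector(scan):
        az = np.arctan2(scan[:, 1], scan[:, 0])
        return (az >= az0) & (az < az1)
    a_in, b_in = in_sector(scan_a), in_sector(scan_b)
    # Keep scan_a outside the sector; fill the sector with points from scan_b.
    return np.concatenate([scan_a[~a_in], scan_b[b_in]], axis=0)
```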
arXiv Detail & Related papers (2022-07-30T13:52:19Z)
- Analyzing General-Purpose Deep-Learning Detection and Segmentation Models with Images from a Lidar as a Camera Sensor [0.06554326244334865]
This work explores the potential of general-purpose DL perception algorithms for processing image-like outputs of advanced lidar sensors.
Rather than processing the three-dimensional point cloud data, this is, to the best of our knowledge, the first work to focus on low-resolution images with a 360° field of view.
We show that with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use in a wider range of environmental conditions.
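One plausible form of that preprocessing, assuming the lidar exposes signal, reflectivity, and near-infrared image channels, is to contrast-stretch each channel and stack them into the 3-channel 8-bit input an off-the-shelf detector expects. Channel choice and scaling below are assumptions.

```python
import numpy as np

def lidar_to_rgb(signal: np.ndarray, reflec: np.ndarray, nearir: np.ndarray) -> np.ndarray:
    def to_u8(x):
        x = x.astype(np.float32)
        lo, hi = np.percentile(x, [1, 99])  # robust contrast stretch
        return np.clip((x - lo) / max(hi - lo, 1e-6) * 255, 0, 255).astype(np.uint8)
    # (H, W, 3) image usable by a general-purpose detection/segmentation model.
    return np.stack([to_u8(signal), to_u8(reflec), to_u8(nearir)], axis=-1)
```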
arXiv Detail & Related papers (2022-03-08T13:14:43Z)
- SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations [85.38562724999898]
We propose a 2D Image and 3D Point cloud Unsupervised pre-training strategy, called SimIPU.
Specifically, we develop a multi-modal contrastive learning framework that consists of an intra-modal spatial perception module and an inter-modal feature interaction module.
To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets.
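The inter-modal interaction module optimizes a contrastive objective between matched image and point features. A generic InfoNCE term of that kind is sketched below; the exact pairing and temperature in SimIPU are assumptions here.

```python
import torch
import torch.nn.functional as F

def info_nce(img_feats: torch.Tensor, pt_feats: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    # img_feats, pt_feats: (N, C); row i of each is a matched image/point pair.
    z_i = F.normalize(img_feats, dim=-1)
    z_p = F.normalize(pt_feats, dim=-1)
    logits = z_i @ z_p.t() / tau               # (N, N) similarity matrix
    target = torch.arange(len(logits))         # positives lie on the diagonal
    return F.cross_entropy(logits, target)
```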
arXiv Detail & Related papers (2021-12-09T03:27:00Z)
- Pix2Point: Learning Outdoor 3D Using Sparse Point Clouds and Optimal Transport [35.10680020334443]
Deep learning has recently provided excellent results for monocular depth estimation.
We propose Pix2Point, a deep learning-based approach for monocular 3D point cloud prediction.
Our method relies on a 2D-3D hybrid neural network architecture, and a supervised end-to-end minimisation of an optimal transport divergence.
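An optimal-transport divergence between predicted and ground-truth point sets can be minimized end-to-end because entropic OT (Sinkhorn iterations) is differentiable. The sketch below is a minimal stand-in for the paper's loss, with an assumed entropic regularization eps.

```python
import torch

def sinkhorn_cost(x: torch.Tensor, y: torch.Tensor,
                  eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    # x: (n, 3) predicted points; y: (m, 3) target points; uniform weights.
    C = torch.cdist(x, y) ** 2                      # squared-distance cost matrix
    K = torch.exp(-C / eps)                         # smaller eps = sharper plan
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])
    u = torch.ones_like(a)
    for _ in range(iters):                          # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                 # transport plan
    return (P * C).sum()                            # differentiable OT cost
```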
arXiv Detail & Related papers (2021-07-30T09:03:39Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
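The warping idea at the heart of scene-flow-based interpolation is compact: move each point partway along its estimated 3D motion to synthesize an intermediate frame. In the paper a network predicts the flow and refines the warped cloud; the one-liner below only illustrates the warp itself.

```python
import torch

def interpolate_frame(points_t: torch.Tensor, flow_t_to_t1: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    # points_t: (N, 3) points at time t; flow_t_to_t1: (N, 3) motion to frame t+1.
    return points_t + alpha * flow_t_to_t1  # synthesized points at time t + alpha
```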
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
- View Invariant Human Body Detection and Pose Estimation from Multiple Depth Sensors [0.7080990243618376]
We propose an end-to-end multi-person 3D pose estimation network, Point R-CNN, using multiple point cloud sources.
We conduct extensive experiments to simulate challenging real world cases, such as individual camera failures, various target appearances, and complex cluttered scenes.
We also show that our end-to-end network greatly outperforms cascaded state-of-the-art models.
arXiv Detail & Related papers (2020-05-08T19:06:28Z)
- Deep Learning for 3D Point Clouds: A Survey [58.954684611055]
This paper presents a review of recent progress in deep learning methods for point clouds.
It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation.
It also presents comparative results on several publicly available datasets.
arXiv Detail & Related papers (2019-12-27T09:15:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.