Learned 3D volumetric recovery of clouds and its uncertainty for climate
analysis
- URL: http://arxiv.org/abs/2403.05932v1
- Date: Sat, 9 Mar 2024 14:57:03 GMT
- Title: Learned 3D volumetric recovery of clouds and its uncertainty for climate
analysis
- Authors: Roi Ronen and Ilan Koren and Aviad Levis and Eshkol Eytan and Vadim
Holodovsky and Yoav Y. Schechner
- Abstract summary: Uncertainty in climate prediction and cloud physics is tied to observational gaps relating to shallow scattered clouds.
We design a learning-based model (ProbCT) to achieve CT of such clouds, based on noisy multi-view spaceborne images.
We demonstrate the approach in simulations and on real-world data, and indicate the relevance of 3D recovery and uncertainty to precipitation and renewable energy.
- Score: 16.260663741590253
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Significant uncertainty in climate prediction and cloud physics is tied to
observational gaps relating to shallow scattered clouds. Addressing these
challenges requires remote sensing of their three-dimensional (3D)
heterogeneous volumetric scattering content. This calls for passive scattering
computed tomography (CT). We design a learning-based model (ProbCT) to achieve
CT of such clouds, based on noisy multi-view spaceborne images. ProbCT infers,
for the first time, the posterior probability distribution of the
heterogeneous extinction coefficient at each 3D location. This yields
arbitrary statistics of value, e.g., the 3D field of the most probable
extinction and its uncertainty. ProbCT uses a neural-field representation,
enabling essentially real-time inference. ProbCT undergoes supervised training
on a new labeled
multi-class database of physics-based volumetric fields of clouds and their
corresponding images. To improve out-of-distribution inference, we incorporate
self-supervised learning through differential rendering. We demonstrate the
approach in simulations and on real-world data, and indicate the relevance of
3D recovery and uncertainty to precipitation and renewable energy.
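The per-location posterior described in the abstract can be illustrated with a minimal sketch: a toy "neural field" that maps a 3D coordinate to a categorical distribution over discretized extinction bins, from which the most probable extinction and an entropy-based uncertainty follow. The bin count, extinction range, and network here are hypothetical stand-ins with random weights, not the paper's trained architecture.

```python
import numpy as np

# Toy "neural field": maps a 3D location to a categorical posterior over
# K discrete extinction-coefficient bins, in the spirit of ProbCT's
# per-voxel posterior.  Weights are random here; the real model is trained
# on a labeled database of physics-based cloud fields.
rng = np.random.default_rng(0)
K = 32                                 # number of extinction bins (assumed)
bins = np.linspace(0.0, 100.0, K)      # extinction values [1/km] (assumed range)
W1 = rng.normal(0, 1, (3, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 1, (64, K)); b2 = np.zeros(K)

def posterior(xyz):
    """Categorical posterior p(extinction bin | 3D location)."""
    h = np.tanh(xyz @ W1 + b1)         # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())  # stable softmax
    return p / p.sum()

def map_and_uncertainty(xyz):
    """Most probable extinction value and its entropy-based uncertainty."""
    p = posterior(xyz)
    return bins[np.argmax(p)], -np.sum(p * np.log(p + 1e-12))

sigma, H = map_and_uncertainty(np.array([0.1, -0.2, 0.5]))
```

Querying the field at every voxel of a grid would yield the "3D field of the most probable extinction and its uncertainty" that the abstract refers to.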
Related papers
- Unsupervised Occupancy Learning from Sparse Point Cloud [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
In this paper, we propose a method to infer occupancy fields instead of Neural Signed Distance Functions.
We highlight its capacity to improve implicit shape inference with respect to baselines and the state-of-the-art using synthetic and real data.
arXiv Detail & Related papers (2024-04-03T14:05:39Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or carries minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Deceptive-NeRF/3DGS: Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction [60.52716381465063]
We introduce Deceptive-NeRF/3DGS to enhance sparse-view reconstruction with only a limited set of input images.
Specifically, we propose a deceptive diffusion model turning noisy images rendered from few-view reconstructions into high-quality pseudo-observations.
Our system progressively incorporates diffusion-generated pseudo-observations into the training image sets, ultimately densifying the sparse input observations by 5 to 10 times.
arXiv Detail & Related papers (2023-05-24T14:00:32Z)
- Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction [63.3021778885906]
3D bounding boxes are a widespread intermediate representation in many computer vision applications.
We propose methods for leveraging our autoregressive model to make high confidence predictions and meaningful uncertainty measures.
We release a simulated dataset, COB-3D, which highlights new types of ambiguity that arise in real-world robotics applications.
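The autoregressive idea in this entry can be sketched briefly: the joint distribution over box parameters is factored into per-parameter conditionals, p(box) = Π p(θᵢ | θ₍<ᵢ₎), so each parameter is decoded given the ones already chosen, and the accumulated log-probability serves as a confidence score. The conditionals below are toy lookup tables seeded from the prefix; the paper learns them with a neural network, and the bin count is assumed.

```python
import numpy as np

K = 8  # discretization bins per box parameter (assumed)

def conditional(prefix):
    """Toy conditional p(theta_i | theta_<i): pseudo-logits derived
    deterministically from the prefix, normalized into a distribution.
    A real model would produce these logits with a neural network."""
    seed = hash(tuple(prefix)) % (2**32)
    logits = np.random.default_rng(seed).normal(size=K)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def sample_box(n_params=6):
    """Decode box parameters one at a time (greedy, 'high confidence'),
    accumulating the joint log-probability as an uncertainty measure."""
    prefix, logp = [], 0.0
    for _ in range(n_params):
        p = conditional(prefix)
        i = int(np.argmax(p))      # greedy choice of the next parameter bin
        logp += float(np.log(p[i]))
        prefix.append(i)
    return prefix, logp
```

Sampling from the conditionals instead of taking the argmax would yield multiple plausible boxes, exposing the ambiguity the paper's COB-3D dataset is designed to highlight.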
arXiv Detail & Related papers (2022-10-13T23:57:40Z)
- On Triangulation as a Form of Self-Supervision for 3D Human Pose Estimation [57.766049538913926]
Supervised approaches to 3D pose estimation from single images are remarkably effective when labeled data is abundant.
Much of the recent attention has shifted towards semi-supervised and weakly supervised learning.
We propose to impose multi-view geometric constraints by means of differentiable triangulation, and to use it as a form of self-supervision during training when no labels are available.
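The multi-view constraint in this entry rests on triangulation; a minimal sketch of standard linear (DLT) triangulation is shown below. This is the textbook method, not necessarily the paper's exact differentiable formulation, and the camera matrices in the usage example are invented for illustration.

```python
import numpy as np

def triangulate_dlt(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point from multiple views.
    projections: 3x4 camera matrices; points_2d: matching (u, v) pixels.
    Each view contributes two linear constraints u*P[2]-P[0] and
    v*P[2]-P[1]; the least-squares solution is the right singular
    vector of the stacked system with the smallest singular value."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Two toy cameras observing the 3D point (1, 2, 10)
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
X_hat = triangulate_dlt([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

Because every step (matrix products and SVD) is differentiable almost everywhere, the reprojection error of the triangulated point can serve as a self-supervised training loss, which is the core idea of the entry above.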
arXiv Detail & Related papers (2022-03-29T19:11:54Z)
- 3D Scattering Tomography by Deep Learning with Architecture Tailored to Cloud Fields [12.139158398361866]
We present 3DeepCT, a deep neural network for computed tomography, which performs 3D reconstruction of scattering volumes from multi-view images.
We show that 3DeepCT outperforms physics-based inverse-scattering methods in terms of accuracy, while offering an improvement of several orders of magnitude in computational time.
arXiv Detail & Related papers (2020-12-10T20:31:44Z)
- Spatiotemporal tomography based on scattered multiangular signals and its application for resolving evolving clouds using moving platforms [0.0]
We derive computed tomography (CT) of a time-varying translucent volumetric object, using a small number of moving cameras.
We demonstrate the approach on dynamic clouds, as clouds have a major effect on Earth's climate.
arXiv Detail & Related papers (2020-12-06T09:22:08Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
- Real-time 3D Nanoscale Coherent Imaging via Physics-aware Deep Learning [0.7664249650622356]
We introduce 3D-CDI-NN, a deep convolutional neural network and differential programming framework trained to predict 3D structure and strain.
Our networks are designed to be "physics-aware" in multiple aspects.
Our integrated machine learning and differential programming solution is broadly applicable across inverse problems in other application areas.
arXiv Detail & Related papers (2020-06-16T18:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.