Data-driven Cloud Clustering via a Rotationally Invariant Autoencoder
- URL: http://arxiv.org/abs/2103.04885v1
- Date: Mon, 8 Mar 2021 16:45:14 GMT
- Title: Data-driven Cloud Clustering via a Rotationally Invariant Autoencoder
- Authors: Takuya Kurihana, Elisabeth Moyer, Rebecca Willett, Davis Gilton, and Ian Foster
- Abstract summary: We describe an automated rotation-invariant cloud clustering (RICC) method.
It organizes cloud imagery within large datasets in an unsupervised fashion.
Results suggest that the resultant cloud clusters capture meaningful aspects of cloud physics.
- Score: 10.660968055962325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced satellite-borne remote sensing instruments produce high-resolution
multi-spectral data for much of the globe at a daily cadence. These datasets
open up the possibility of improved understanding of cloud dynamics and
feedback, which remain the biggest source of uncertainty in global climate
model projections. As a step towards answering these questions, we describe an
automated rotation-invariant cloud clustering (RICC) method that leverages deep
learning autoencoder technology to organize cloud imagery within large datasets
in an unsupervised fashion, free from assumptions about predefined classes. We
describe both the design and implementation of this method and its evaluation,
which uses a sequence of testing protocols to determine whether the resulting
clusters: (1) are physically reasonable (i.e., embody scientifically relevant
distinctions); (2) capture information on spatial distributions, such as
textures; (3) are cohesive and separable in latent space; and (4) are
rotationally invariant (i.e., insensitive to the orientation of an image).
Results obtained when these evaluation protocols are applied to RICC outputs
suggest that the resultant novel cloud clusters capture meaningful aspects of
cloud physics, are appropriately spatially coherent, and are invariant to
orientations of input images. Our results support the possibility of using an
unsupervised data-driven approach for automated clustering and pattern
discovery in cloud imagery.
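To make the idea of rotation invariance concrete, the following is a minimal, hypothetical sketch of a rotation-invariant reconstruction loss of the kind such an autoencoder could use: the decoder output is scored against every 90-degree rotation of the input and only the smallest error is kept, so the latent code is not penalized for ignoring image orientation. The framework (PyTorch), the function name, and the restriction to 90-degree rotations are illustrative assumptions, not the authors' exact RICC objective.

```python
import torch


def rotation_invariant_loss(reconstruction: torch.Tensor,
                            target: torch.Tensor) -> torch.Tensor:
    """Reconstruction loss that ignores 90-degree rotations of the target.

    Both tensors have shape (batch, channels, height, width).
    """
    per_rotation_errors = []
    for k in range(4):  # candidate orientations: 0, 90, 180, 270 degrees
        rotated_target = torch.rot90(target, k, dims=(2, 3))
        # Per-sample mean squared error against this orientation.
        mse = ((reconstruction - rotated_target) ** 2).mean(dim=(1, 2, 3))
        per_rotation_errors.append(mse)
    # Keep the best-matching orientation for each sample, then average.
    return torch.stack(per_rotation_errors).min(dim=0).values.mean()
```

In a pipeline of this kind, the encoder trained with such a loss would produce latent codes for each cloud image patch, a standard clustering algorithm would group those codes, and the evaluation protocols listed above would be applied to the resulting clusters.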
Related papers
- PGCS: Physical Law embedded Generative Cloud Synthesis in Remote Sensing Images [9.655563155560658]
A physical law embedded generative cloud synthesis method (PGCS) is proposed to generate diverse, realistic cloud images to enhance real data.
Two cloud correction methods are developed from PGCS and exhibit superior performance compared to state-of-the-art methods on the cloud correction task.
arXiv Detail & Related papers (2024-10-22T12:36:03Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Point Cloud Compression with Implicit Neural Representations: A Unified Framework [54.119415852585306]
We present a pioneering point cloud compression framework capable of handling both geometry and attribute components.
Our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud.
Our method is more broadly applicable than existing learning-based techniques.
arXiv Detail & Related papers (2024-05-19T09:19:40Z)
- Distribution-aware Interactive Attention Network and Large-scale Cloud Recognition Benchmark on FY-4A Satellite Image [24.09239785062109]
We develop a novel dataset for accurate cloud recognition.
We use domain adaptation methods to align 70,419 image-label pairs in terms of projection, temporal resolution, and spatial resolution.
We also introduce a Distribution-aware Interactive-Attention Network (DIAnet), which preserves pixel-level details through a high-resolution branch and a parallel cross-branch.
arXiv Detail & Related papers (2024-01-06T09:58:09Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile and edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- Hyperspherical Embedding for Point Cloud Completion [25.41194214006682]
This paper proposes a hyperspherical module, which transforms and normalizes embeddings from the encoder to be on a unit hypersphere.
We theoretically analyze the hyperspherical embedding and show that it enables more stable training with a wider range of learning rates and more compact embedding distributions.
Experimental results show consistent improvements in point cloud completion in both single-task and multi-task learning.
arXiv Detail & Related papers (2023-07-11T08:18:37Z)
- Insight into cloud processes from unsupervised classification with a rotationally invariant autoencoder [10.739352302280667]
Current cloud classification schemes are based on single-pixel cloud properties and cannot consider spatial structures and textures.
Recent advances in computer vision enable the grouping of different patterns of images without using human predefined labels.
We describe the use of such methods to generate a new AI-driven Cloud Classification Atlas (AICCA).
arXiv Detail & Related papers (2022-11-02T04:08:32Z)
- Data Augmentation-free Unsupervised Learning for 3D Point Cloud Understanding [61.30276576646909]
We propose an augmentation-free unsupervised approach for point clouds to learn transferable point-level features via soft clustering, named SoftClu.
We exploit the affiliation of points to their clusters as a proxy to enable self-training through a pseudo-label prediction task.
arXiv Detail & Related papers (2022-10-06T10:18:16Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Learning Rotation-Invariant Representations of Point Clouds Using Aligned Edge Convolutional Neural Networks [29.3830445533532]
Point cloud analysis is an area of increasing interest due to the development of 3D sensors that are able to rapidly measure the depth of scenes accurately.
Applying deep learning techniques to perform point cloud analysis is non-trivial due to the inability of these methods to generalize to unseen rotations.
To address this limitation, one usually has to augment the training data, which adds computation and requires greater model capacity.
This paper proposes a new neural network called the Aligned Edge Convolutional Neural Network (AECNN) that learns a feature representation of point clouds relative to Local Reference Frames (LRFs).
arXiv Detail & Related papers (2021-01-02T17:36:00Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)