DIRESA, a distance-preserving nonlinear dimension reduction technique based on regularized autoencoders
- URL: http://arxiv.org/abs/2404.18314v1
- Date: Sun, 28 Apr 2024 20:54:57 GMT
- Title: DIRESA, a distance-preserving nonlinear dimension reduction technique based on regularized autoencoders
- Authors: Geert De Paepe, Lesley De Cruz
- Abstract summary: In meteorology, finding similar weather patterns or analogs in historical datasets can be useful for data assimilation, forecasting, and postprocessing.
In climate science, analogs in historical and climate projection data are used for attribution and impact studies.
We propose a dimension reduction technique based on autoencoder (AE) neural networks to compress those datasets and perform the search in an interpretable, compressed latent space.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In meteorology, finding similar weather patterns or analogs in historical datasets can be useful for data assimilation, forecasting, and postprocessing. In climate science, analogs in historical and climate projection data are used for attribution and impact studies. However, most of the time, those large weather and climate datasets are nearline. They must be downloaded, which takes a lot of bandwidth and disk space, before the computationally expensive search can be executed. We propose a dimension reduction technique based on autoencoder (AE) neural networks to compress those datasets and perform the search in an interpretable, compressed latent space. A distance-regularized Siamese twin autoencoder (DIRESA) architecture is designed to preserve distance in latent space while capturing the nonlinearities in the datasets. Using conceptual climate models of different complexities, we show that the latent components thus obtained provide physical insight into the dominant modes of variability in the system. Compressing datasets with DIRESA reduces the online storage and keeps the latent components uncorrelated, while the distance (ordering) preservation and reconstruction fidelity robustly outperform Principal Component Analysis (PCA) and other dimension reduction techniques such as UMAP or variational autoencoders.
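The core ingredients described above (a weight-sharing Siamese twin encoder, a reconstruction objective, a distance-preservation term, and uncorrelated latent components) can be illustrated with a short sketch. The PyTorch code below is a minimal illustration of that loss structure, not the authors' implementation: the layer sizes, the loss weights `w_dist` and `w_cov`, and all names are assumptions for exposition.

```python
# Minimal sketch (assumed form, not the authors' code) of a distance-
# regularized Siamese twin autoencoder loss in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistRegSiameseAE(nn.Module):
    def __init__(self, n_in: int, n_latent: int, n_hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def diresa_style_loss(model, x, x_twin, w_dist=1.0, w_cov=1.0):
    """Reconstruction + distance preservation + latent decorrelation.
    w_dist and w_cov are illustrative hyperparameters."""
    z, x_hat = model(x)
    z_twin, _ = model(x_twin)  # the twin branch shares all weights
    recon = F.mse_loss(x_hat, x)
    # Encourage latent pair distances to match input pair distances.
    d_in = torch.norm(x - x_twin, dim=1)
    d_lat = torch.norm(z - z_twin, dim=1)
    dist = F.mse_loss(d_lat, d_in)
    # Penalize off-diagonal latent covariance (uncorrelated components).
    zc = z - z.mean(dim=0)
    cov = (zc.T @ zc) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    decorr = (off_diag ** 2).sum()
    return recon + w_dist * dist + w_cov * decorr
```

Under this reading, the distance term is what makes nearest-neighbor (analog) search in the compressed latent space meaningful, while the decorrelation penalty plays a role analogous to the orthogonality of PCA components.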
Related papers
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD).
This method systematically explores hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z)
- Machine Learning Techniques for Data Reduction of Climate Applications [11.55089543867768]
We present a pipelined compression approach that first uses neural-network-based techniques to derive regions where quantities of interest (QoI) are highly likely to be present.
We employ a Guaranteed Autoencoder (GAE) to compress data with differential error bounds.
Results are presented for climate data generated from the E3SM Simulation model for downstream quantities such as tropical cyclone and atmospheric river detection and tracking.
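A generic way to obtain such guaranteed (pointwise) error bounds on top of a lossy autoencoder is to store sparse residual corrections wherever the reconstruction violates the bound. The NumPy sketch below illustrates that general pattern only; the function names are hypothetical, and the GAE's actual mechanism may differ.

```python
# Sketch (illustrative only) of enforcing a pointwise error bound on a lossy
# autoencoder reconstruction by storing sparse residual corrections.
import numpy as np

def enforce_error_bound(original: np.ndarray, reconstruction: np.ndarray,
                        bound: float):
    """Return sparse corrections so that max|corrected - original| <= bound."""
    residual = original - reconstruction
    idx = np.flatnonzero(np.abs(residual) > bound)  # bound-violating points
    corrections = residual.flat[idx]  # stored alongside the compressed code
    return idx, corrections

def apply_corrections(reconstruction: np.ndarray, idx, corrections):
    """Decompression side: patch the reconstruction with the corrections."""
    out = reconstruction.copy()
    out.flat[idx] += corrections
    return out
```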
arXiv Detail & Related papers (2024-05-01T21:44:47Z)
- Distributional Principal Autoencoders [2.519266955671697]
Dimension reduction techniques usually lose information in the sense that reconstructed data are not identical to the original data.
We propose the Distributional Principal Autoencoder (DPA), which consists of an encoder that maps high-dimensional data to low-dimensional latent variables.
For reconstructing data, the DPA decoder aims to match the conditional distribution of all data that are mapped to a certain latent value.
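One concrete way to train a decoder against a conditional distribution, rather than a single point, is to draw several stochastic reconstructions per latent code and score them with the energy score, which is minimized when the draws match the data distribution. The sketch below shows such a scoring function; it is an assumed formulation for illustration, not necessarily the DPA paper's exact objective.

```python
# Sketch (assumed form) of an energy-score reconstruction loss: the decoder
# draws m stochastic reconstructions per input, and the score is minimized
# when those draws match the conditional data distribution.
import torch

def energy_score(samples: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """samples: (m, batch, dim) decoder draws, m >= 2; y: (batch, dim) data."""
    m = samples.shape[0]
    # E ||X - y||: distance of each draw to the observed data point.
    term1 = torch.norm(samples - y.unsqueeze(0), dim=-1).mean()
    # E ||X - X'||: spread among the draws (mean over all m*m pairs,
    # rescaled to exclude the zero diagonal).
    diffs = samples.unsqueeze(0) - samples.unsqueeze(1)
    term2 = torch.norm(diffs, dim=-1).mean() * m / (m - 1)
    return term1 - 0.5 * term2
```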
arXiv Detail & Related papers (2024-04-21T12:52:04Z)
- Computationally and Memory-Efficient Robust Predictive Analytics Using Big Data [0.0]
This study addresses the challenges of data uncertainty, storage limitations, and predictive data-driven modeling with big data.
We utilize Robust Principal Component Analysis (RPCA) for effective noise reduction and outlier elimination, and Optimal Sensor Placement (OSP) for efficient data compression and storage.
arXiv Detail & Related papers (2024-03-27T22:39:08Z)
- History Matching for Geological Carbon Storage using Data-Space Inversion with Spatio-Temporal Data Parameterization [0.0]
In data-space inversion (DSI), history-matched quantities of interest are inferred directly, without constructing posterior geomodels.
This is accomplished efficiently using a set of O(1000) prior simulation results, data parameterization, and posterior sampling within a Bayesian setting.
The new parameterization uses an adversarial autoencoder (AAE) for dimension reduction and a convolutional long short-term memory (convLSTM) network to represent the spatial distribution and temporal evolution of the pressure and saturation fields.
arXiv Detail & Related papers (2023-10-05T00:50:06Z)
- ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal Prediction [55.30913411696375]
We propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which introduces receptive field modules whose sizes are matched to the distinct roles of the encoder and decoder.
In the encoder, we present a large kernel module for global spatiotemporal feature extraction; in the decoder, we develop a small kernel module for local spatiotemporal reconstruction (a minimal sketch of this asymmetry follows below).
We construct the RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
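The encoder/decoder kernel asymmetry described above reduces, in its simplest form, to convolutions with different kernel sizes on the two sides of the bottleneck. The sketch below is a minimal illustration under assumed channel counts and kernel sizes; it is not the ARFA architecture itself.

```python
# Sketch (assumed sizes, not the ARFA configuration) of asymmetric receptive
# fields: a large-kernel encoder convolution and a small-kernel decoder
# convolution around the bottleneck.
import torch.nn as nn

class AsymmetricKernelAE(nn.Module):
    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        # Encoder: large kernel for broad (global) spatiotemporal context.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3), nn.ReLU())
        # Decoder: small kernel for local reconstruction detail.
        self.decoder = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))
```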
arXiv Detail & Related papers (2023-09-01T07:55:53Z)
- ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing [73.31614936678571]
We introduce a novel approach for solving the dataset condensation problem by exploiting the regularity in a given dataset.
Instead of condensing the dataset directly in the original input space, we assume a generative process of the dataset with a set of learnable codes.
We experimentally show that our method achieves new state-of-the-art records by significant margins on various benchmark datasets.
arXiv Detail & Related papers (2022-08-21T18:14:08Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.