Fusing Climate Data Products using a Spatially Varying Autoencoder
- URL: http://arxiv.org/abs/2403.07822v1
- Date: Tue, 12 Mar 2024 17:03:07 GMT
- Title: Fusing Climate Data Products using a Spatially Varying Autoencoder
- Authors: Jacob A. Johnson, Matthew J. Heaton, William F. Christensen, Lynsie R.
Warr, and Summer B. Rupper
- Abstract summary: This research focuses on creating an identifiable and interpretable autoencoder.
The proposed autoencoder utilizes a Bayesian statistical framework.
We demonstrate the utility of the autoencoder by combining information from multiple precipitation products in High Mountain Asia.
- Score: 0.5825410941577593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoders are powerful machine learning models used to compress
information from multiple data sources. However, autoencoders, like all
artificial neural networks, are often unidentifiable and uninterpretable. This
research focuses on creating an identifiable and interpretable autoencoder that
can be used to meld and combine climate data products. The proposed autoencoder
utilizes a Bayesian statistical framework, allowing for probabilistic
interpretations while also varying spatially to capture useful spatial patterns
across the various data products. Constraints are placed on the autoencoder as
it learns patterns in the data, creating an interpretable consensus that
includes the important features from each input. We demonstrate the utility of
the autoencoder by combining information from multiple precipitation products
in High Mountain Asia.
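The abstract does not spell out the architecture, but the core idea (compressing several co-located data products into a single interpretable "consensus" latent per location) can be illustrated loosely. Below is a minimal NumPy sketch using a one-latent linear autoencoder, which has a closed-form solution via the SVD; the synthetic "products", their biases, and all variable names are invented for illustration, and the paper's Bayesian, spatially varying machinery is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): a 1-D "true" precipitation field
# observed by three data products, each with its own bias and noise level.
n_sites = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n_sites))
products = np.stack([
    truth + 0.5 + 0.1 * rng.standard_normal(n_sites),  # product A: biased high
    truth - 0.3 + 0.2 * rng.standard_normal(n_sites),  # product B: biased low
    truth + 0.4 * rng.standard_normal(n_sites),        # product C: unbiased, noisy
], axis=1)                                             # shape (n_sites, 3)

# A linear autoencoder with a one-dimensional latent has a closed form:
# the optimal encoder/decoder span the top principal direction of the
# centered data matrix.
mean = products.mean(axis=0)
X = products - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v = Vt[0]                        # shared encoder/decoder direction, shape (3,)
z = X @ v                        # per-site latent "consensus" value
recon = np.outer(z, v) + mean    # reconstruction of all three products
```

Because the three products share one underlying signal, the scalar latent `z` recovers it up to sign and scale; the paper's contribution is to make such a latent identifiable and probabilistically interpretable by constraining the weights and letting them vary smoothly in space under a Bayesian prior, none of which this toy version attempts.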
Related papers
- Interpreting Outliers in Time Series Data through Decoding Autoencoder [2.156170153103442]
This study focuses on manufacturing time series data from a German automotive supply industry.
We utilize autoencoders to compress the entire time series and then apply anomaly detection techniques to its latent features.
arXiv Detail & Related papers (2024-09-03T08:52:21Z)
- Remote sensing framework for geological mapping via stacked autoencoders and clustering [0.15833270109954137]
We present an unsupervised machine learning-based framework for processing remote sensing data.
We use Landsat 8, ASTER, and Sentinel-2 datasets to evaluate the framework for geological mapping of the Mutawintji region in Australia.
Our results reveal that the framework produces accurate and interpretable geological maps, efficiently discriminating rock units.
arXiv Detail & Related papers (2024-04-02T09:15:32Z)
- Generative Autoencoding of Dropout Patterns [11.965844936801801]
We propose a generative model termed Deciphering Autoencoders.
We assign a unique random dropout pattern to each data point in the training dataset.
We then train an autoencoder to reconstruct the corresponding data point using this pattern as information to be encoded.
arXiv Detail & Related papers (2023-10-03T00:54:13Z)
- Learning Nonparametric High-Dimensional Generative Models: The Empirical-Beta-Copula Autoencoder [1.5714999163044752]
It is necessary to model the autoencoder's latent space with a distribution from which samples can be obtained.
This study aims to discuss, assess, and compare various techniques that can be used to capture the latent space.
arXiv Detail & Related papers (2023-09-18T16:29:36Z)
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing [73.31614936678571]
We introduce a novel approach for solving the dataset condensation problem by exploiting the regularity in a given dataset.
Instead of condensing the dataset directly in the original input space, we assume a generative process of the dataset with a set of learnable codes.
We experimentally show that our method achieves new state-of-the-art records by significant margins on various benchmark datasets.
arXiv Detail & Related papers (2022-08-21T18:14:08Z)
- Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning [72.9458277424712]
Mixture Model Auto-Encoders (MixMate) is a novel architecture that clusters data by performing inference on a generative model.
We show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms.
arXiv Detail & Related papers (2021-10-10T02:30:31Z)
- Neural Distributed Source Coding [59.630059301226474]
We present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions.
We evaluate our method on multiple datasets and show that it can handle complex correlations and achieves state-of-the-art PSNR.
arXiv Detail & Related papers (2021-06-05T04:50:43Z)
- A Showcase of the Use of Autoencoders in Feature Learning Applications [11.329636084818778]
Autoencoders are techniques for data representation learning based on artificial neural networks.
This work presents these applications and provides details on how autoencoders can perform them, including code samples making use of an R package with an easy-to-use interface for autoencoder design and training.
arXiv Detail & Related papers (2020-05-08T23:56:26Z)
- Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.