Joint Characterization of Spatiotemporal Data Manifolds
- URL: http://arxiv.org/abs/2108.09545v1
- Date: Sat, 21 Aug 2021 16:42:22 GMT
- Title: Joint Characterization of Spatiotemporal Data Manifolds
- Authors: Daniel Sousa and Christopher Small
- Abstract summary: Dimensionality reduction (DR) is a type of characterization designed to mitigate the "curse of dimensionality" on high-D signals.
Recent years have seen the additional development of a suite of nonlinear DR algorithms, frequently categorized as "manifold learning".
Here, we show that three DR approaches (PCs/EOFs, Laplacian Eigenmaps, and t-SNE) can yield complementary information about ST manifold topology.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatiotemporal (ST) image data are increasingly common and often
high-dimensional (high-D). Modeling ST data can be a challenge due to the
plethora of independent and interacting processes which may or may not
contribute to the measurements. Characterization can be considered the
complement to modeling by helping guide assumptions about generative processes
and their representation in the data. Dimensionality reduction (DR) is a
frequently implemented type of characterization designed to mitigate the "curse
of dimensionality" on high-D signals. For decades, Principal Component (PC) and
Empirical Orthogonal Function (EOF) analysis has been used as a linear,
invertible approach to DR and ST analysis. Recent years have seen the
additional development of a suite of nonlinear DR algorithms, frequently
categorized as "manifold learning". Here, we explore the idea of joint
characterization of ST data manifolds using PCs/EOFs alongside two nonlinear DR
approaches: Laplacian Eigenmaps (LE) and t-distributed stochastic neighbor
embedding (t-SNE). Starting with a synthetic example and progressing to global,
regional, and field scale ST datasets spanning roughly 5 orders of magnitude in
space and 2 in time, we show these three DR approaches can yield complementary
information about ST manifold topology. Compared to the relatively diffuse
temporal feature space (TFS) produced by PCs/EOFs, the nonlinear approaches
yield more compact manifolds with decreased ambiguity in temporal endmembers
(LE) and/or in spatiotemporal clustering (t-SNE). These advantages are balanced
by the greater interpretability, significantly lower computational demand, and
diminished sensitivity to spatial aliasing of PCs/EOFs relative to LE and
t-SNE. Taken together, we find that joint characterization using the three
complementary DR approaches is capable of greater insight into generative ST
processes than is possible using any single approach alone.
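The joint characterization described above can be illustrated with a minimal sketch: apply the three complementary DR methods to a common synthetic spatiotemporal dataset and compare the resulting embeddings. This is an illustrative assumption, not the authors' code; it assumes scikit-learn (whose `SpectralEmbedding` implements Laplacian Eigenmaps), and the toy two-endmember dataset is hypothetical.

```python
# Sketch of joint characterization: linear PCA/EOF, Laplacian Eigenmaps,
# and t-SNE applied to the same synthetic ST dataset (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding, TSNE

rng = np.random.default_rng(0)

# Toy ST data: 500 pixels, each a 60-step time series mixing two seasonal
# "endmember" patterns plus noise (a stand-in generative process).
t = np.linspace(0, 2 * np.pi, 60)
endmembers = np.stack([np.sin(t), np.cos(2 * t)])       # (2, 60)
abundances = rng.dirichlet([1.0, 1.0], size=500)        # (500, 2) mixing weights
X = abundances @ endmembers + 0.05 * rng.standard_normal((500, 60))

# Linear, invertible DR: the PC/EOF view, spanning the temporal feature space.
pc = PCA(n_components=3).fit_transform(X)

# Nonlinear DR 1: Laplacian Eigenmaps, which tends to compact the manifold
# and reduce ambiguity in temporal endmembers.
le = SpectralEmbedding(n_components=3, n_neighbors=15).fit_transform(X)

# Nonlinear DR 2: t-SNE, which emphasizes local spatiotemporal clustering.
ts = TSNE(n_components=2, perplexity=30.0, init="pca",
          random_state=0).fit_transform(X)

print(pc.shape, le.shape, ts.shape)  # (500, 3) (500, 3) (500, 2)
```

Inspecting the three embeddings side by side is the point of the exercise: each view exposes structure (endmembers, clusters, variance gradients) that the others blur.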
Related papers
- DiHuR: Diffusion-Guided Generalizable Human Reconstruction [51.31232435994026]
We introduce DiHuR, a Diffusion-guided model for generalizable Human 3D Reconstruction and view synthesis from sparse, minimally overlapping images.
Our method integrates two key priors in a coherent manner: the prior from generalizable feed-forward models and the 2D diffusion prior, and it requires only multi-view image training, without 3D supervision.
arXiv Detail & Related papers (2024-11-16T03:52:23Z) - Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z) - Kernel spectral joint embeddings for high-dimensional noisy datasets using duo-landmark integral operators [9.782959684053631]
We propose a novel kernel spectral method that achieves joint embeddings of two independently observed high-dimensional noisy datasets.
The obtained low-dimensional embeddings can be utilized for many downstream tasks such as simultaneous clustering, data visualization, and denoising.
arXiv Detail & Related papers (2024-05-20T18:29:36Z) - Synthetic location trajectory generation using categorical diffusion
models [50.809683239937584]
Diffusion models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Averaging Spatio-temporal Signals using Optimal Transport and Soft
Alignments [110.79706180350507]
We show that our proposed loss can be used to define spatio-temporal barycenters as Fréchet means.
Experiments on handwritten letters and brain imaging data confirm our theoretical findings.
arXiv Detail & Related papers (2022-03-11T09:46:22Z) - Learning Generative Prior with Latent Space Sparsity Constraints [25.213673771175692]
It has been argued that the distribution of natural images does not lie on a single manifold but rather on a union of several submanifolds.
We propose a sparsity-driven latent space sampling (SDLSS) framework and develop a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space.
The results demonstrate that for a higher degree of compression, the SDLSS method is more efficient than the state-of-the-art method.
arXiv Detail & Related papers (2021-05-25T14:12:04Z) - Invertible Manifold Learning for Dimension Reduction [44.16432765844299]
Dimension reduction (DR) aims to learn low-dimensional representations of high-dimensional data with the preservation of essential information.
We propose a novel two-stage DR method, called invertible manifold learning (inv-ML) to bridge the gap between theoretical information-lossless and practical DR.
Experiments are conducted on seven datasets with a neural network implementation of inv-ML, called i-ML-Enc.
arXiv Detail & Related papers (2020-10-07T14:22:51Z) - Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral
Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis.
The technique is called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.