Dissipative residual layers for unsupervised implicit parameterization of data manifolds
- URL: http://arxiv.org/abs/2210.07100v1
- Date: Thu, 13 Oct 2022 15:28:29 GMT
- Title: Dissipative residual layers for unsupervised implicit parameterization of data manifolds
- Authors: Viktor Reshniak
- Abstract summary: In our approach, the data is assumed to belong to a lower-dimensional manifold in a higher-dimensional space.
Under this assumption, the data manifold is an attractive manifold of a dynamical system to be estimated.
We parameterize such a dynamical system with a residual neural network and propose a spectral localization technique to ensure it is locally attractive in the vicinity of data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an unsupervised technique for implicit parameterization of data
manifolds. In our approach, the data is assumed to belong to a lower-dimensional
manifold in a higher-dimensional space, and the data points are viewed as the
endpoints of trajectories originating outside the manifold.
Under this assumption, the data manifold is an attractive manifold of a
dynamical system to be estimated. We parameterize such a dynamical system with
a residual neural network and propose a spectral localization technique to
ensure it is locally attractive in the vicinity of the data. We also present
initialization and additional regularization of the proposed residual layers,
which we call dissipative bottlenecks. We discuss the importance of the
considered problem for reinforcement learning tasks and support the discussion
with examples demonstrating the performance of the proposed layers in denoising
and generative tasks.
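As a rough illustration of the idea, the sketch below implements a residual step of the form x ← x − h·∇E(x), an explicit Euler step of a gradient flow, which is one simple way to obtain a locally attractive (dissipative) residual layer. The energy parameterization, layer sizes, and step size here are illustrative assumptions; the paper's spectral localization technique for enforcing local attractiveness near the data is not reproduced.

```python
import torch
import torch.nn as nn

class DissipativeResidualLayer(nn.Module):
    """Sketch of a dissipative residual step x <- x - h * grad E(x).

    Here E(x) = sum(log cosh(W x + b)) is convex, so the update direction
    W^T tanh(W x + b) is the gradient of a convex energy, and the explicit
    Euler step below follows a dissipative gradient flow. This is only an
    illustration, not the paper's spectral localization construction.
    """

    def __init__(self, dim: int, hidden: int = 64, step: float = 0.1):
        super().__init__()
        self.lin = nn.Linear(dim, hidden)
        self.step = step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # grad_x sum(log cosh(W x + b)) = W^T tanh(W x + b)
        return x - self.step * torch.tanh(self.lin(x)) @ self.lin.weight


# Stacking layers iterates the flow; for a trained model, trajectories of
# noisy points should settle near the learned (attractive) data manifold.
net = nn.Sequential(*[DissipativeResidualLayer(dim=2) for _ in range(10)])
denoised = net(torch.randn(128, 2))
```

Composing such steps makes the network itself a discretized dynamical system, which is the viewpoint the abstract describes for denoising and generation.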
Related papers
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD).
This method systematically explores hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit dimensionality reduction (DR) and clustering under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Polynomial Chaos Expansions on Principal Geodesic Grassmannian Submanifolds for Surrogate Modeling and Uncertainty Quantification [0.41709348827585524]
We introduce a manifold learning-based surrogate modeling framework for uncertainty quantification in high-dimensional systems.
We employ Principal Geodesic Analysis on the Grassmann manifold of the response to identify a set of disjoint principal geodesic submanifolds.
Polynomial chaos expansion is then used to construct a mapping between the random input parameters and the projection of the response.
arXiv Detail & Related papers (2024-01-30T02:13:02Z)
- Multi-Linear Kernel Regression and Imputation in Data Manifolds [12.15802365851407]
This paper introduces an efficient multi-linear nonparametric approximation framework for data regression and imputation, and its application to dynamic magnetic-resonance imaging (dMRI).
Data features are assumed to reside in or close to a smooth manifold embedded in a reproducing kernel Hilbert space. Landmark points are identified to describe the point cloud of features by linear approximating patches which mimic the concept of tangent spaces to smooth manifolds.
The multi-linear model effects dimensionality reduction, enables efficient computations, and extracts data patterns and their geometry without any training data or additional information.
arXiv Detail & Related papers (2023-04-06T12:58:52Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Learning Low-Dimensional Nonlinear Structures from High-Dimensional Noisy Data: An Integral Operator Approach [5.975670441166475]
We propose a kernel-spectral embedding algorithm for learning low-dimensional nonlinear structures from high-dimensional and noisy observations.
The algorithm employs an adaptive bandwidth selection procedure which does not rely on prior knowledge of the underlying manifold.
The obtained low-dimensional embeddings can be further utilized for downstream purposes such as data visualization, clustering, and prediction. A minimal sketch of such a kernel-spectral embedding is given after this list.
arXiv Detail & Related papers (2022-02-28T22:46:34Z)
- Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower-dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
arXiv Detail & Related papers (2021-10-14T15:50:38Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes the drawback of existing methods that seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step.
arXiv Detail & Related papers (2020-05-19T05:54:14Z)
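Referring back to the kernel-spectral embedding entry above, the following is a minimal sketch of a Gaussian-kernel spectral embedding with a simple self-tuning bandwidth (the distance to each point's k-th neighbor). The bandwidth rule is a stand-in assumption; the cited paper's adaptive bandwidth selection procedure is different and more principled.

```python
import numpy as np

def kernel_spectral_embedding(X: np.ndarray, dim: int = 2, k: int = 7) -> np.ndarray:
    """Spectral embedding with a per-point (self-tuning) Gaussian bandwidth.

    sigma_i = distance to the k-th neighbor is an illustrative assumption,
    not the adaptive procedure of the cited paper.
    """
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Per-point bandwidth: distance to the k-th nearest neighbor.
    sigma = np.sort(D, axis=1)[:, k]
    K = np.exp(-D**2 / (sigma[:, None] * sigma[None, :]))
    # Symmetric normalization, as in diffusion maps / spectral clustering.
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))
    # Eigenvectors of the normalized affinity; the top one is trivial, skip it.
    _, vecs = np.linalg.eigh(A)
    return vecs[:, -(dim + 1):-1][:, ::-1]

# Example: embed a noisy circle into 2-D coordinates.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, size=300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((300, 2))
Y = kernel_spectral_embedding(X, dim=2)
```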
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.