Deep Manifold Transformation for Nonlinear Dimensionality Reduction
- URL: http://arxiv.org/abs/2010.14831v3
- Date: Mon, 3 May 2021 15:24:11 GMT
- Title: Deep Manifold Transformation for Nonlinear Dimensionality Reduction
- Authors: Stan Z. Li, Zelin Zang, Lirong Wu
- Abstract summary: We propose a deep manifold learning framework, called deep manifold transformation (DMT), for unsupervised NLDR and embedding learning.
DMT enhances deep neural networks by using cross-layer local geometry-preserving (LGP) constraints.
Experiments on synthetic and real-world data demonstrate that DMT networks outperform existing leading manifold-based NLDR methods in terms of preserving the structures of data.
- Score: 37.7499958388076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manifold learning-based encoders have been playing important roles in nonlinear dimensionality reduction (NLDR) for data exploration. However, existing methods often fail to preserve the geometric, topological and/or distributional structures of data. In this paper, we propose a deep manifold learning framework, called deep manifold transformation (DMT), for unsupervised NLDR and embedding learning. DMT enhances deep neural networks by using cross-layer local geometry-preserving (LGP) constraints. The LGP constraints constitute the loss for deep manifold learning and serve as geometric regularizers for NLDR network training. Extensive experiments on synthetic and real-world data demonstrate that DMT networks outperform existing leading manifold-based NLDR methods in terms of preserving the structures of data.
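The abstract names the LGP constraints but not their functional form. Below is a minimal PyTorch sketch of a local geometry-preserving regularizer in the spirit of DMT, assuming a simple distance-distortion penalty over input-space k-nearest-neighbor pairs; `lgp_loss`, the choice of `k`, and the toy encoder are illustrative assumptions, not the paper's definitions.

```python
import torch

def lgp_loss(x: torch.Tensor, z: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Penalize distortion of local pairwise distances between an input
    batch x of shape (n, d_in) and its embedding z of shape (n, d_out)."""
    dx = torch.cdist(x, x)  # (n, n) input-space distances
    dz = torch.cdist(z, z)  # (n, n) embedding-space distances
    # k-nearest neighbors in the input space define the "local" pairs;
    # k + 1 because each point is its own nearest neighbor.
    knn = dx.topk(k + 1, largest=False).indices[:, 1:]
    rows = torch.arange(x.size(0)).unsqueeze(1).expand_as(knn)
    # Squared distortion on local pairs only: far pairs are unconstrained.
    return ((dx[rows, knn] - dz[rows, knn]) ** 2).mean()

encoder = torch.nn.Sequential(
    torch.nn.Linear(100, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
x = torch.randn(256, 100)
loss = lgp_loss(x, encoder(x))  # added to the NLDR training objective
loss.backward()
```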
Related papers
- Deep Learning as Ricci Flow [38.27936710747996]
Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data.
We show that the transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow.
Our findings motivate the application of tools from differential and discrete geometry to the problem of explainability in deep learning.
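As a hedged illustration of this geometric viewpoint (not the paper's actual machinery), the sketch below tracks how a stack of layers deforms pairwise distances, a crude discrete stand-in for a metric evolving under a flow; the network and the distortion statistic are assumptions.

```python
import torch

# Five generic layers standing in for a trained classifier's hidden stack.
layers = torch.nn.ModuleList([
    torch.nn.Sequential(torch.nn.Linear(20, 20), torch.nn.Tanh())
    for _ in range(5)
])

x = torch.randn(128, 20)
with torch.no_grad():
    h, prev = x, torch.cdist(x, x)
    for i, layer in enumerate(layers):
        h = layer(h)
        cur = torch.cdist(h, h)
        mask = prev > 0
        # Mean relative change of the pairwise metric introduced by layer i.
        change = ((cur[mask] - prev[mask]) / prev[mask]).mean()
        print(f"layer {i}: mean relative metric change {change:.3f}")
        prev = cur
```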
arXiv Detail & Related papers (2024-04-22T15:12:47Z)
- Deep Learning Weight Pruning with RMT-SVD: Increasing Accuracy and Reducing Overfitting [0.0]
The spectrum of the weight layers of a deep neural network (DNN) can be studied and understood using techniques from random matrix theory (RMT).
In this work, these RMT techniques are used to determine which and how many singular values should be removed from the weight layers of a DNN during training, via singular value decomposition (SVD).
We show the results on a simple DNN model trained on MNIST.
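A minimal sketch of RMT-guided SVD truncation, assuming the common recipe of discarding singular values that fall inside the random-matrix bulk; the Bai-Yin edge estimate and the noise-scale estimate below are assumptions, and the paper's training-time criterion may differ in detail.

```python
import torch

def rmt_svd_prune(w: torch.Tensor) -> torch.Tensor:
    """Zero out singular values of w that sit below the random-matrix bulk edge."""
    m, n = w.shape
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    sigma = w.std()  # crude entrywise noise-scale estimate (an assumption)
    # Bai-Yin: the largest singular value of an m x n i.i.d. noise matrix with
    # entry std sigma concentrates near sigma * (sqrt(m) + sqrt(n)).
    edge = sigma * (m ** 0.5 + n ** 0.5)
    s_pruned = torch.where(s > edge, s, torch.zeros_like(s))
    return u @ torch.diag(s_pruned) @ vh

# Rank-8 signal buried in dense noise: pruning should recover rank ~8.
w = torch.randn(784, 8) @ torch.randn(8, 256) + 0.05 * torch.randn(784, 256)
print(int(torch.linalg.matrix_rank(rmt_svd_prune(w))))
```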
arXiv Detail & Related papers (2023-03-15T23:19:45Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
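As a hedged companion to this claim, one can probe the effective dimensionality of an overparameterized RNN's hidden states with a PCA-style spectrum of the state trajectory; the untrained RNN and the 99% variance threshold below are illustrative assumptions.

```python
import torch

# An untrained, overparameterized RNN stands in for the trained models the
# paper studies; after training, this effective dimensionality is the
# quantity of interest.
rnn = torch.nn.RNN(input_size=4, hidden_size=256, batch_first=True)
x = torch.randn(32, 100, 4)
with torch.no_grad():
    states, _ = rnn(x)            # (32, 100, 256) hidden-state trajectory
h = states.reshape(-1, 256)
h = h - h.mean(dim=0)             # center before the PCA-style spectrum
s = torch.linalg.svdvals(h)
var = s ** 2 / (s ** 2).sum()
# Number of principal directions needed to explain 99% of state variance:
print(int((var.cumsum(0) < 0.99).sum()) + 1)
```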
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Deep Recursive Embedding for High-Dimensional Data [9.611123249318126]
We propose to combine deep neural networks (DNNs) with mathematics-guided embedding rules for high-dimensional data embedding.
We introduce a generic deep embedding network (DEN) framework, which is able to learn a parametric mapping from high-dimensional space to low-dimensional space.
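A minimal sketch of such a parametric mapping, assuming a plain MLP encoder; the mathematics-guided embedding loss the paper couples it with is omitted here.

```python
import torch

class EmbeddingNet(torch.nn.Module):
    """Parametric map from a d_in-dimensional space to a d_out-dimensional one."""

    def __init__(self, d_in: int = 784, d_out: int = 2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d_in, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, d_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = EmbeddingNet()
x = torch.randn(512, 784)
z = model(x)      # (512, 2); unseen points embed with a single forward pass,
print(z.shape)    # which is the practical payoff of a parametric embedding
```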
arXiv Detail & Related papers (2021-10-31T23:22:33Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
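A minimal sketch of a rank-R layer for matrix-shaped inputs, assuming the standard CP form in which each output unit's weight tensor is a sum of R outer products; the shapes, rank, and toy sizes are illustrative assumptions.

```python
import torch

class RankRLayer(torch.nn.Module):
    """Rank-R layer for matrix-shaped inputs: each output unit's weight tensor
    is constrained to a sum of R outer products (a CP decomposition), so the
    input is consumed mode-by-mode without vectorization."""

    def __init__(self, i1: int, i2: int, rank: int, out: int):
        super().__init__()
        # One factor matrix per input mode, per output unit.
        self.a = torch.nn.Parameter(torch.randn(out, rank, i1) * 0.1)
        self.b = torch.nn.Parameter(torch.randn(out, rank, i2) * 0.1)
        self.bias = torch.nn.Parameter(torch.zeros(out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, i1, i2); y[n, o] = sum_r a[o, r]^T x[n] b[o, r] + bias[o]
        return torch.einsum('nij,ori,orj->no', x, self.a, self.b) + self.bias

layer = RankRLayer(i1=9, i2=16, rank=4, out=10)  # e.g. spatial x spectral modes
x = torch.randn(8, 9, 16)
print(torch.sigmoid(layer(x)).shape)             # torch.Size([8, 10])
```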
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
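A hedged sketch of depth-adaptive inference for an unrolled iterative network: execution halts once the refinement step becomes small. The fixed tolerance below is an assumption; the paper learns the stopping behavior rather than thresholding it.

```python
import torch

class UnrolledNet(torch.nn.Module):
    """Unrolled iterative refinement with early halting at inference."""

    def __init__(self, dim: int = 64, max_depth: int = 20):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(max_depth)]
        )

    def forward(self, x: torch.Tensor, tol: float = 1e-3) -> torch.Tensor:
        z = x
        for depth, layer in enumerate(self.layers, start=1):
            z_next = z + 0.1 * torch.tanh(layer(z))   # small refinement step
            # Halt once the relative update is small enough.
            if (z_next - z).norm() / z.norm() < tol:
                print(f"halted at depth {depth}")
                return z_next
            z = z_next
        return z

model = UnrolledNet()
with torch.no_grad():
    out = model(torch.randn(1, 64))
print(out.shape)
```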
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Invertible Manifold Learning for Dimension Reduction [44.16432765844299]
Dimension reduction (DR) aims to learn low-dimensional representations of high-dimensional data with the preservation of essential information.
We propose a novel two-stage DR method, called invertible manifold learning (inv-ML), to bridge the gap between theoretically information-lossless DR and practical DR.
Experiments are conducted on seven datasets with a neural network implementation of inv-ML, called i-ML-Enc.
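As a generic illustration of invertible dimension reduction (not inv-ML's exact construction), an invertible coupling block can expose a low-dimensional embedding while retaining an exact inverse; the block design and the sparsity penalty below are assumptions.

```python
import torch

class Coupling(torch.nn.Module):
    """Additive coupling block: invertible by construction."""

    def __init__(self, dim: int = 10, d: int = 2):
        super().__init__()
        self.d = d
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d, 32), torch.nn.ReLU(), torch.nn.Linear(32, dim - d)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.d], x[:, self.d:]
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, :self.d], y[:, self.d:]
        return torch.cat([y1, y2 - self.net(y1)], dim=1)

block = Coupling()
x = torch.randn(4, 10)
y = block(x)
z = y[:, :2]                      # leading coordinates as the embedding
sparsity = y[:, 2:].abs().mean()  # penalty pushing discarded coords to zero
print(torch.allclose(block.inverse(y), x, atol=1e-6))  # exact invertibility
```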
arXiv Detail & Related papers (2020-10-07T14:22:51Z)
- A Tailored Convolutional Neural Network for Nonlinear Manifold Learning of Computational Physics Data using Unstructured Spatial Discretizations [0.0]
We propose a nonlinear manifold learning technique based on deep convolutional autoencoders.
The technique is appropriate for model order reduction of physical systems in complex geometries.
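A minimal convolutional-autoencoder sketch for gridded field snapshots; note that the paper's contribution is the tailoring to unstructured spatial discretizations, which this structured-grid toy does not capture.

```python
import torch

class ConvAE(torch.nn.Module):
    """Convolutional autoencoder compressing 2D field snapshots."""

    def __init__(self, latent: int = 16):
        super().__init__()
        self.enc = torch.nn.Sequential(
            torch.nn.Conv2d(1, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(8, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Flatten(), torch.nn.Linear(16 * 16 * 16, latent),
        )
        self.dec = torch.nn.Sequential(
            torch.nn.Linear(latent, 16 * 16 * 16),
            torch.nn.Unflatten(1, (16, 16, 16)),
            torch.nn.ConvTranspose2d(16, 8, 2, stride=2), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(8, 1, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

field = torch.randn(4, 1, 64, 64)   # e.g. snapshots of a flow field
model = ConvAE()
recon = model(field)
loss = torch.nn.functional.mse_loss(recon, field)
print(recon.shape, float(loss))
```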
arXiv Detail & Related papers (2020-06-11T02:19:34Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
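A heavily hedged sketch of layer-local training in the spirit of representation alignment: each hidden layer is trained against a local target formed from a fixed random feedback projection of the output error, so no gradient ever crosses a layer boundary. The target rule, feedback matrices, and step sizes are all assumptions, not the paper's algorithm.

```python
import torch

dims = [20, 32, 32, 5]
layers = torch.nn.ModuleList(
    [torch.nn.Linear(dims[i], dims[i + 1]) for i in range(3)]
)
# Fixed random feedback matrices that project the output error back to each
# hidden layer, replacing backprop's transposed forward weights.
feedback = [torch.randn(dims[3], dims[i + 1]) * 0.1 for i in range(2)]
opt = torch.optim.SGD(layers.parameters(), lr=0.01)

x, y = torch.randn(64, 20), torch.randn(64, 5)
for step in range(200):
    # Detaching each layer's input keeps every loss term layer-local.
    h = [x]
    for layer in layers:
        h.append(torch.tanh(layer(h[-1].detach())))
    err = (h[-1] - y).detach()
    loss = ((h[-1] - y) ** 2).mean()          # output layer's local loss
    for i in range(2):                        # hidden layers' local losses
        target = (h[i + 1] - err @ feedback[i]).detach()
        loss = loss + ((h[i + 1] - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(((h[-1] - y) ** 2).mean()))
```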
arXiv Detail & Related papers (2020-02-10T16:20:02Z)