Tensor Train Random Projection
- URL: http://arxiv.org/abs/2010.10797v4
- Date: Wed, 20 Oct 2021 13:34:49 GMT
- Title: Tensor Train Random Projection
- Authors: Yani Feng, Kejun Tang, Lianxing He, Pingqiang Zhou, Qifeng Liao
- Abstract summary: This work proposes a novel tensor train random projection (TTRP) method for dimension reduction.
Our TTRP is systematically constructed through a tensor train representation with TT-ranks equal to one.
Based on the tensor train format, this new random projection method can speed up the dimension reduction procedure for high-dimensional datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a novel tensor train random projection (TTRP) method for
dimension reduction, where pairwise distances can be approximately preserved.
Our TTRP is systematically constructed through a tensor train (TT)
representation with TT-ranks equal to one. Based on the tensor train format,
this new random projection method can speed up the dimension reduction
procedure for high-dimensional datasets and requires less storage costs with
little loss in accuracy, compared with existing methods. We provide a
theoretical analysis of the bias and the variance of TTRP, which shows that
this approach is an expected isometric projection with bounded variance, and we
show that the Rademacher distribution is an optimal choice for generating the
corresponding TT-cores. Detailed numerical experiments with synthetic datasets
and the MNIST dataset are conducted to demonstrate the efficiency of TTRP.
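Because all TT-ranks equal one, the projection amounts to contracting each mode of the tensorized input with a small Rademacher matrix and rescaling. Below is a minimal NumPy sketch of that construction, written for this summary rather than taken from the authors' released code; the helper names (`make_cores`, `ttrp`) and the example dimension factorizations are assumptions.

```python
import numpy as np

def make_cores(in_dims, out_dims, rng):
    # One Rademacher core of shape (m_i, n_i) per mode; with all
    # TT-ranks equal to one, the projection reduces to contracting
    # every mode of the tensorized input with its core.
    return [rng.choice([-1.0, 1.0], size=(m, n))
            for m, n in zip(out_dims, in_dims)]

def ttrp(x, in_dims, cores):
    # Tensorize x, contract each mode with its core, then rescale by
    # 1/sqrt(m) so the projection is an isometry in expectation.
    X = x.reshape(in_dims)
    for mode, G in enumerate(cores):
        X = np.moveaxis(np.tensordot(G, X, axes=([1], [mode])), 0, mode)
    return X.reshape(-1) / np.sqrt(X.size)

# Pairwise distances should be approximately preserved (720 -> 100 dims).
rng = np.random.default_rng(0)
cores = make_cores((8, 9, 10), (4, 5, 5), rng)
x1, x2 = rng.standard_normal((2, 720))
d_in = np.linalg.norm(x1 - x2)
d_out = np.linalg.norm(ttrp(x1, (8, 9, 10), cores) - ttrp(x2, (8, 9, 10), cores))
print(d_in, d_out)  # the two distances should be close
```

Storing the small cores costs only the sum of m_i * n_i entries instead of the (prod m_i) x (prod n_i) entries of a dense projection matrix, which is the storage saving the abstract refers to.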
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z) - On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs) [12.810268045479992]
We study the universal approximation and sample complexity of the DiTs score function.
We show that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.
arXiv Detail & Related papers (2024-07-01T08:34:40Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with the data imputation step.
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Multi-mode Tensor Train Factorization with Spatial-spectral
Regularization for Remote Sensing Images Recovery [1.3272510644778104]
We propose a novel low-MTT-rank tensor completion model via multi-mode TT factorization and spatial-spectral smoothness regularization.
We show that the proposed MTTD3R method outperforms the compared methods in terms of visual and quantitative measures.
arXiv Detail & Related papers (2022-05-05T07:36:08Z) - Rademacher Random Projections with Tensor Networks [10.140147080535222]
We consider a tensorized random projection relying on the tensor train (TT) decomposition.
Experiments on synthetic data demonstrate that the tensorized Rademacher RP can outperform the tensorized Gaussian RP (a Gaussian-core variant is sketched after this list).
arXiv Detail & Related papers (2021-10-26T19:18:20Z) - Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z) - Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of generative modeling benchmark datasets, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
arXiv Detail & Related papers (2020-12-28T05:14:22Z) - Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration [10.034394572576922]
We propose a new algorithm to estimate the low tensor-train rank structure from the noisy high-order tensor observation.
The merits of the proposed TTOI are illustrated through applications to estimation and dimension reduction of high-order Markov processes.
arXiv Detail & Related papers (2020-10-06T05:18:24Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z) - Tensorized Random Projections [8.279639493543401]
We propose two tensorized random projection maps relying on the tensor train(TT) and CP decomposition format, respectively.
The two maps offer very low memory requirements and can be applied efficiently when the inputs are low rank tensors given in the CP or TT format.
Our results reveal that the TT format is substantially superior to CP in terms of the size of the random projection needed to achieve the same distortion ratio.
arXiv Detail & Related papers (2020-03-11T03:56:44Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, but learning a parameter for every interaction of every order incurs an exponential computational and memory cost.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)