Oblivious subspace embeddings for compressed Tucker decompositions
- URL: http://arxiv.org/abs/2406.09387v1
- Date: Thu, 13 Jun 2024 17:58:32 GMT
- Title: Oblivious subspace embeddings for compressed Tucker decompositions
- Authors: Matthew Pietrosanu, Bei Jiang, Linglong Kong
- Abstract summary: This work establishes general Johnson-Lindenstrauss type guarantees for the estimation of Tucker decompositions.
On moderately large face image and fMRI neuroimaging datasets, empirical results show that substantial dimension reduction is possible.
- Score: 8.349583867022204
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Emphasis in the tensor literature on random embeddings (tools for low-distortion dimension reduction) for the canonical polyadic (CP) tensor decomposition has left analogous results for the more expressive Tucker decomposition comparatively lacking. This work establishes general Johnson-Lindenstrauss (JL) type guarantees for the estimation of Tucker decompositions when an oblivious random embedding is applied along each mode. When these embeddings are drawn from a JL-optimal family, the decomposition can be estimated within $\varepsilon$ relative error under restrictions on the embedding dimension that are in line with recent CP results. We implement a higher-order orthogonal iteration (HOOI) decomposition algorithm with random embeddings to demonstrate the practical benefits of this approach and its potential to improve the accessibility of otherwise prohibitive tensor analyses. On moderately large face image and fMRI neuroimaging datasets, empirical results show that substantial dimension reduction is possible with minimal increase in reconstruction error relative to traditional HOOI ($\leq$5% larger error, 50%-60% lower computation time for large models with 50% dimension reduction along each mode). Especially for large tensors, our method outperforms traditional higher-order singular value decomposition (HOSVD) and recently proposed TensorSketch methods.
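The abstract's core idea (an oblivious random embedding applied along each mode before decomposition) can be illustrated with a minimal numpy sketch. This is our own hedged illustration, not the paper's implementation: `mode_n_product` and `sketch_tensor` are names we introduce, and we use plain Gaussian JL embeddings as one example of a JL-optimal family.

```python
import numpy as np

def mode_n_product(X, M, n):
    """Mode-n product: multiply matrix M into mode n of tensor X."""
    # Unfold X along mode n, apply M, then fold back.
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
    Y = M @ Xn
    new_shape = (M.shape[0],) + tuple(s for i, s in enumerate(X.shape) if i != n)
    return np.moveaxis(Y.reshape(new_shape), 0, n)

def sketch_tensor(X, sketch_dims, rng):
    """Apply an independent Gaussian JL embedding along each mode of X,
    reducing mode n from X.shape[n] to sketch_dims[n]."""
    Y = X
    for n, m in enumerate(sketch_dims):
        S = rng.standard_normal((m, Y.shape[n])) / np.sqrt(m)
        Y = mode_n_product(Y, S, n)
    return Y
```

A Tucker decomposition (e.g., via HOOI) would then be fit to the much smaller sketched tensor `Y` rather than to `X`.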
Related papers
- Convolutional Neural Network Compression Based on Low-Rank Decomposition [3.3295360710329738]
This paper proposes a model compression method that integrates Variational Bayesian Matrix Factorization (VBMF).
VBMF is employed to estimate the rank of the weight tensor at each layer.
Experimental results show that for both high and low compression ratios, our compression model exhibits advanced performance.
arXiv Detail & Related papers (2024-08-29T06:40:34Z)
- Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
arXiv Detail & Related papers (2023-05-15T22:08:47Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Orthogonal Matrix Retrieval with Spatial Consensus for 3D Unknown-View Tomography [58.60249163402822]
Unknown-view tomography (UVT) reconstructs a 3D density map from its 2D projections at unknown, random orientations.
The proposed orthogonal matrix retrieval (OMR) method is more robust and performs significantly better than the previous state-of-the-art OMR approach.
arXiv Detail & Related papers (2022-07-06T21:40:59Z)
- TensoRF: Tensorial Radiance Fields [74.16791688888081]
We present TensoRF, a novel approach to model and reconstruct radiance fields.
We model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features.
We show that TensoRF with CP decomposition achieves fast reconstruction (30 min) with better rendering quality and even a smaller model size (4 MB) compared to NeRF.
arXiv Detail & Related papers (2022-03-17T17:59:59Z)
- Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
arXiv Detail & Related papers (2021-10-26T15:02:27Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Fast and Accurate Randomized Algorithms for Low-rank Tensor Decompositions [1.8356693937139124]
Low-rank Tucker and CP tensor decompositions are powerful tools in data analytics.
We propose a fast and accurate sketched ALS algorithm for Tucker decomposition.
This is further used to accelerate CP decomposition: randomized Tucker compression is applied first, followed by CP decomposition of the small Tucker core tensor.
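The Tucker-compression step of this compress-then-decompose pipeline can be sketched in plain numpy via truncated HOSVD (leading left singular vectors per mode). This is an illustrative stand-in, not the paper's sketched ALS algorithm; `unfold` and `hosvd_compress` are names we introduce. A CP decomposition would then be run on the small core, and its factors B_n lifted back as A_n = U_n B_n.

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding of tensor X into a matrix."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def hosvd_compress(X, ranks):
    """Truncated HOSVD: project each mode onto its leading singular vectors.
    Returns the compressed core and the per-mode factor matrices U."""
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = X
    for n, Un in enumerate(U):
        other = tuple(s for i, s in enumerate(core.shape) if i != n)
        core = np.moveaxis(
            (Un.T @ unfold(core, n)).reshape((Un.shape[1],) + other), 0, n)
    return core, U
```

Since each U_n has orthonormal columns, compressing with full ranks loses nothing; truncating the ranks trades accuracy for a much cheaper downstream CP fit.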
arXiv Detail & Related papers (2021-04-02T15:55:02Z)
- Robust Tensor Decomposition for Image Representation Based on Generalized Correntropy [37.968665739578185]
We propose a new robust tensor decomposition method using the generalized correntropy criterion (Corr-Tensor).
A Lagrange multiplier method is used to effectively optimize the generalized correntropy objective function in an iterative manner.
Experimental results demonstrate that the proposed method significantly reduces reconstruction error in face reconstruction and improves accuracy on handwritten digit recognition and facial image clustering.
arXiv Detail & Related papers (2020-05-10T08:46:52Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
- Tensorized Random Projections [8.279639493543401]
We propose two tensorized random projection maps relying on the tensor train (TT) and CP decomposition formats, respectively.
The two maps offer very low memory requirements and can be applied efficiently when the inputs are low rank tensors given in the CP or TT format.
Our results reveal that the TT format is substantially superior to CP in terms of the size of the random projection needed to achieve the same distortion ratio.
arXiv Detail & Related papers (2020-03-11T03:56:44Z)
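The memory advantage claimed above can be illustrated with a minimal sketch of a rank-1 CP-format random projection: each output coordinate is the inner product of the input tensor with a random rank-1 tensor (one Gaussian vector per mode), so the full k-by-prod(d) projection matrix is never materialized. This is our own hedged example, not the paper's construction; `cp_rank1_projection` is a name we introduce.

```python
import numpy as np

def cp_rank1_projection(X, k, rng):
    """Project tensor X to k coordinates, each an inner product with a
    random rank-1 tensor a1 (outer) a2 (outer) ... built on the fly."""
    y = np.empty(k)
    for j in range(k):
        v = X
        for _ in range(X.ndim):
            # Contract the current leading mode with a fresh Gaussian vector.
            a = rng.standard_normal(v.shape[0])
            v = np.tensordot(a, v, axes=1)
        y[j] = float(v)
    return y
```

The map is linear in X, and storage per output coordinate is only the sum of the mode sizes rather than their product; higher CP or TT ranks generalize this by summing several such rank-1 contractions.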
This list is automatically generated from the titles and abstracts of the papers in this site.