Optimal Sparse Singular Value Decomposition for High-dimensional High-order Data
- URL: http://arxiv.org/abs/1809.01796v2
- Date: Mon, 8 Jul 2024 02:19:52 GMT
- Title: Optimal Sparse Singular Value Decomposition for High-dimensional High-order Data
- Authors: Anru Zhang, Rungang Han
- Abstract summary: We consider the sparse tensor singular value decomposition, which aims for dimension reduction on high-dimensional high-order data with certain sparsity structure.
A method named Sparse Tensor Alternating Thresholding for Singular Value Decomposition (STAT-SVD) is proposed.
The proposed procedure features a novel double projection & thresholding scheme, which provides a sharp criterion for thresholding in each iteration.
- Score: 4.987670632802288
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this article, we consider the sparse tensor singular value decomposition, which aims for dimension reduction on high-dimensional high-order data with certain sparsity structure. A method named Sparse Tensor Alternating Thresholding for Singular Value Decomposition (STAT-SVD) is proposed. The proposed procedure features a novel double projection & thresholding scheme, which provides a sharp criterion for thresholding in each iteration. Compared with the regular tensor SVD model, STAT-SVD permits more robust estimation under weaker assumptions. Both upper and lower bounds for estimation accuracy are developed. The proposed procedure is shown to be minimax rate-optimal in a general class of situations. Simulation studies show that STAT-SVD performs well under a variety of configurations. We also illustrate the merits of the proposed procedure on a longitudinal tensor dataset on European country mortality rates.
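As a rough intuition for the alternating thresholding idea, here is a minimal matrix (order-2) sketch: power iterations for the leading singular pair, with hard thresholding of the singular vectors to enforce sparsity. This is an illustrative toy, not the paper's STAT-SVD; the function name, the relative threshold rule, and the rank-1 restriction are all assumptions for brevity, whereas the actual procedure operates on tensors with a double projection & thresholding scheme and data-driven thresholds.

```python
import numpy as np

def sparse_rank1_svd(A, threshold=0.1, n_iter=50):
    """Toy rank-1 sparse SVD: alternating power iterations with
    hard thresholding of each singular vector (matrix case only).
    Entries below `threshold` times the largest magnitude are zeroed."""
    m, n = A.shape
    v = np.ones(n) / np.sqrt(n)  # uninformative start for the right vector
    for _ in range(n_iter):
        u = A @ v
        u[np.abs(u) < threshold * np.abs(u).max()] = 0.0  # sparsify left vector
        u /= np.linalg.norm(u)
        v = A.T @ u
        v[np.abs(v) < threshold * np.abs(v).max()] = 0.0  # sparsify right vector
        v /= np.linalg.norm(v)
    sigma = u @ A @ v
    return u, sigma, v

# Sparse rank-1 signal plus small Gaussian noise
rng = np.random.default_rng(0)
u0 = np.zeros(30); u0[:3] = 1 / np.sqrt(3)
v0 = np.zeros(40); v0[:4] = 1 / np.sqrt(4)
A = 5.0 * np.outer(u0, v0) + 0.1 * rng.standard_normal((30, 40))
u, s, v = sparse_rank1_svd(A)
print(np.count_nonzero(u), np.count_nonzero(v))  # far fewer nonzeros than 30/40
```

The thresholding step is what distinguishes this from a plain power iteration: it zeroes coordinates whose magnitude is small relative to the largest one, so the estimated singular vectors stay supported on a few coordinates rather than absorbing noise everywhere.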
Related papers
- Distributionally Robust Optimization as a Scalable Framework to Characterize Extreme Value Distributions [22.765095010254118]
The goal of this paper is to develop distributionally robust optimization (DRO) estimators, specifically for multidimensional Extreme Value Theory (EVT) statistics.
In order to mitigate over-conservative estimates while enhancing out-of-sample performance, we study DRO estimators informed by semi-parametric max-stable constraints in the space of point processes.
Both approaches are validated using synthetically generated data, recovering prescribed characteristics, and verifying the efficacy of the proposed techniques.
arXiv Detail & Related papers (2024-07-31T19:45:27Z) - Probabilistic Reduced-Dimensional Vector Autoregressive Modeling with Oblique Projections [0.7614628596146602]
We propose a reduced-dimensional vector autoregressive model to extract low-dimensional dynamics from noisy data.
An optimal oblique decomposition is derived for the best predictability regarding prediction error covariance.
The superior performance and efficiency of the proposed approach are demonstrated using data sets from a synthesized Lorenz system and an industrial process from Eastman Chemical.
arXiv Detail & Related papers (2024-01-14T05:38:10Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - On the Noise Sensitivity of the Randomized SVD [8.98526174345299]
The randomized singular value decomposition (R-SVD) is a popular sketching-based algorithm for efficiently computing the partial SVD of a large matrix.
We analyze the R-SVD under a low-rank signal plus noise measurement model.
The singular values produced by the R-SVD are shown to exhibit a BBP-like phase transition.
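A minimal sketch of the R-SVD itself, in the standard Halko-Martinsson-Tropp style: project onto a random subspace, orthonormalize, and take an exact SVD of the small projected matrix. The function name and the oversampling default are illustrative choices, and this shows only the algorithm being analyzed, not the paper's noise-sensitivity analysis.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    """Sketch of the randomized SVD (R-SVD) for a rank-k partial SVD.
    A random test matrix captures the range of A; the SVD is then
    computed on a small (k + oversample) x n matrix instead of A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # small projected matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Low-rank signal plus noise measurement model, as in the paper's setting
rng = np.random.default_rng(1)
L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
A = L + 0.01 * rng.standard_normal((100, 80))
U, s, Vt = randomized_svd(A, k=5, seed=2)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative error of rank-5 R-SVD: {err:.4f}")
```

The payoff is cost: the expensive exact SVD runs on a (k + oversample) x n matrix rather than the full m x n one, at the price of the noise sensitivity the paper quantifies.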
arXiv Detail & Related papers (2023-05-27T10:15:17Z) - Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation [29.486258609570545]
Two efficient low-rank approximation approaches are first devised under the high-order (d >= 3) T-SVD framework.
The proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimated precision.
arXiv Detail & Related papers (2023-05-19T07:51:36Z) - Greedy based Value Representation for Optimal Coordination in
Multi-agent Reinforcement Learning [64.05646120624287]
We derive the expression of the joint Q value function of LVD and MVD.
To ensure optimal consistency, the optimal node is required to be the unique STN.
Our method outperforms state-of-the-art baselines in experiments on various benchmarks.
arXiv Detail & Related papers (2022-11-22T08:14:50Z) - Numerical Optimizations for Weighted Low-rank Estimation on Language
Model [73.12941276331316]
Singular value decomposition (SVD) is one of the most popular compression methods that approximates a target matrix with smaller matrices.
Standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
We show that our method can perform better than current SOTA methods in neural-based language models.
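For context, a minimal sketch of the uniform-importance baseline that paper improves on: plain truncated-SVD compression of a weight matrix, which replaces one m x n matrix with two thin factors. The function name and shapes are illustrative assumptions, not the paper's weighted method.

```python
import numpy as np

def svd_compress(W, rank):
    """Compress a weight matrix by truncated SVD: W ~= A @ B.
    Standard SVD treats every entry with equal importance -- the
    simple but unrealistic assumption the weighted approach relaxes."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # m x rank factor (singular values folded in)
    B = Vt[:rank]                # rank x n factor
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))          # stand-in for a model weight matrix
A, B = svd_compress(W, rank=32)
params_before = W.size
params_after = A.size + B.size
print(params_before, params_after)           # 131072 vs 24576 parameters
```

By the Eckart-Young theorem this truncation is optimal in Frobenius norm when all entries matter equally; the paper's point is that in language models they do not, so the importance weighting changes which rank-k factorization is best.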
arXiv Detail & Related papers (2022-11-02T00:58:02Z) - Training Discrete Deep Generative Models via Gapped Straight-Through
Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
arXiv Detail & Related papers (2022-06-15T01:46:05Z) - Why Approximate Matrix Square Root Outperforms Accurate SVD in Global
Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass, and Padé Approximants in the backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performances on both large-scale and fine-grained datasets.
arXiv Detail & Related papers (2021-05-06T08:03:45Z) - Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration [10.034394572576922]
We propose a new algorithm to estimate the low tensor-train rank structure from the noisy high-order tensor observation.
The merits of the proposed TTOI are illustrated through applications to estimation and dimension reduction of high-order Markov processes.
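The tensor-train structure that entry estimates can be sketched with the basic TT-SVD construction (sequential reshapes and truncated SVDs, in the style of Oseledets); TTOI then refines such an estimate with alternating forward/backward updates. The function name and the toy ranks below are illustrative assumptions.

```python
import numpy as np

def tt_svd(T, ranks):
    """TT-SVD sketch: factor a d-way tensor into a tensor train of
    3-way cores via sequential reshapes and truncated SVDs."""
    dims = T.shape
    d = len(dims)
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(ranks[k], len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # k-th TT core
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))             # last TT core
    return cores

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
cores = tt_svd(T, ranks=[2, 3])
# Contract the train back into a full tensor and check the shape
X = cores[0]
for G in cores[1:]:
    X = np.tensordot(X, G, axes=([-1], [0]))
print(X.squeeze().shape)  # (4, 5, 6)
```

Each core is a small 3-way array, so storage scales with the TT ranks rather than exponentially in the order d, which is what makes the format attractive for the high-order Markov applications mentioned above.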
arXiv Detail & Related papers (2020-10-06T05:18:24Z) - Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
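The second regularizer can be sketched as a simple hinge on the minibatch variance of the embeddings: the penalty is zero for a healthy batch and grows as the batch collapses toward a single point (the "hypersphere collapse" failure mode of deep SVDD). The hinge form, the floor value, and the function name here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def variance_penalty(embeddings, floor=1e-2):
    """Penalize a minibatch of embeddings whose variance falls below
    a floor, to discourage mapping all points to the hypersphere center."""
    var = embeddings.var(axis=0).mean()   # mean per-dimension batch variance
    return max(0.0, floor - var) / floor  # positive only when variance is tiny

rng = np.random.default_rng(0)
healthy = rng.standard_normal((64, 16))                    # spread-out batch
collapsed = np.ones((64, 16)) + 1e-4 * rng.standard_normal((64, 16))
print(variance_penalty(healthy), variance_penalty(collapsed))
```

Added to the training loss, such a term leaves normal training untouched but pushes back exactly when the trivial constant solution starts to dominate.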
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.