Towards Efficient and Accurate Approximation: Tensor Decomposition Based
on Randomized Block Krylov Iteration
- URL: http://arxiv.org/abs/2211.14828v1
- Date: Sun, 27 Nov 2022 13:45:28 GMT
- Title: Towards Efficient and Accurate Approximation: Tensor Decomposition Based
on Randomized Block Krylov Iteration
- Authors: Yichun Qiu, Weijun Sun, Guoxu Zhou, Qibin Zhao
- Abstract summary: This work designs an rBKI-based Tucker decomposition (rBKI-TK) for accurate approximation, together with a hierarchical tensor ring decomposition based on rBKI-TK for efficient compression of large-scale data.
Numerical experiments demonstrate the efficiency, accuracy and scalability of the proposed methods in both data compression and denoising.
- Score: 27.85452105378894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient and accurate low-rank approximation (LRA) methods are of great
significance for large-scale data analysis. Randomized tensor decompositions
have emerged as powerful tools to meet this need, but most existing methods
perform poorly in the presence of noise interference. Inspired by the
remarkable performance of randomized block Krylov iteration (rBKI) in reducing
the effect of tail singular values, this work designs an rBKI-based Tucker
decomposition (rBKI-TK) for accurate approximation, together with a
hierarchical tensor ring decomposition based on rBKI-TK for efficient
compression of large-scale data. In addition, the error bound between the
deterministic LRA and the randomized LRA is studied. Numerical experiments
demonstrate the efficiency, accuracy and scalability of the proposed methods in
both data compression and denoising.
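The abstract describes rBKI as a way to damp tail singular values before the Tucker contraction. Below is a minimal sketch of that pipeline, assuming the standard Musco-Musco Krylov block construction; the function names, oversampling, and iteration counts are illustrative, not the authors' implementation.

```python
import numpy as np

def rbki_range_finder(A, rank, power_iters=2, oversample=5):
    """Randomized block Krylov iteration (Musco & Musco style):
    build an orthonormal basis whose range captures the leading
    singular directions of A while suppressing the tail spectrum."""
    n = A.shape[1]
    omega = np.random.randn(n, rank + oversample)   # Gaussian test block
    block = A @ omega
    krylov = [block]
    for _ in range(power_iters):                    # K = [AW, (AA^T)AW, ...]
        block = A @ (A.T @ block)
        krylov.append(block)
    Q, _ = np.linalg.qr(np.hstack(krylov))          # orthonormal Krylov basis
    return Q

def rbki_tucker(T, ranks, power_iters=2):
    """Sketch of an rBKI-based Tucker decomposition: estimate each
    factor from the mode-k unfolding, then contract T for the core."""
    factors = []
    for k, r in enumerate(ranks):
        Ak = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)  # mode-k unfolding
        Q = rbki_range_finder(Ak, r, power_iters)
        U, _, _ = np.linalg.svd(Q.T @ Ak, full_matrices=False)
        factors.append(Q @ U[:, :r])                # top-r left singular estimate
    core = T
    for k, U in enumerate(factors):                 # core = T x_1 U1^T ... x_N UN^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)
    return core, factors
```

Stacking all Krylov blocks, rather than keeping only the last power iterate, is what lets rBKI suppress tail singular values more aggressively per pass over the data.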
Related papers
- Triply Laplacian Scale Mixture Modeling for Seismic Data Noise Suppression [51.87076090814921]
Sparsity-based tensor recovery methods have shown great potential in suppressing seismic data noise.
We propose a novel triply Laplacian scale mixture (TLSM) approach for seismic data noise suppression.
arXiv Detail & Related papers (2025-02-20T08:28:01Z)
- Robust Anomaly Detection via Tensor Chidori Pseudoskeleton Decomposition [0.0]
Anomaly detection plays a critical role in modern data-driven applications.
Traditional approaches, such as distance-, density-, or cluster-based methods, face challenges when applied to high-dimensional tensor data.
This paper leverages the Chidori pseudoskeleton decomposition to extract low-Tucker-rank structure while isolating sparse anomalies.
arXiv Detail & Related papers (2025-02-14T05:34:57Z)
- Robust Representation Consistency Model via Contrastive Denoising [83.47584074390842]
Randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations.
Diffusion models have been successfully employed for randomized smoothing to purify noise-perturbed samples.
We reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space.
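As background, randomized smoothing certifies a classifier by voting over Gaussian-perturbed copies of the input. A minimal sketch of the smoothed predictor in the style of Cohen et al. (not this paper's latent-space reformulation; `classifier`, `sigma`, and `n_samples` are placeholder names):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000):
    """Randomized smoothing: classify many Gaussian-perturbed copies
    of x and return the majority-vote label and its empirical frequency.
    `classifier` maps a batch of inputs to integer class labels."""
    noise = sigma * np.random.randn(n_samples, *x.shape)
    votes = classifier(x[None, ...] + noise)        # label per noisy copy
    counts = np.bincount(votes)
    return counts.argmax(), counts.max() / n_samples
```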
arXiv Detail & Related papers (2025-01-22T18:52:06Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
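The logistic-loss formulation mentioned above is compact to write down. A sketch under the standard NCE setup, where `log_model` carries a learnable log-partition term and `log_noise` is the known noise log-density (both names are illustrative):

```python
import numpy as np

def nce_loss(log_model, log_noise, x_data, x_noise):
    """Logistic-loss form of NCE: discriminate real samples from
    artificial noise. `log_model` is the unnormalized log-density
    (with a learnable log-partition folded in); `log_noise` is the
    known noise log-density; nu is the noise-to-data ratio."""
    nu = len(x_noise) / len(x_data)
    g_data = log_model(x_data) - log_noise(x_data) - np.log(nu)
    g_noise = log_model(x_noise) - log_noise(x_noise) - np.log(nu)
    # -log sigmoid(g) = log(1 + e^{-g});  -log sigmoid(-g) = log(1 + e^{g})
    return np.mean(np.logaddexp(0.0, -g_data)) + nu * np.mean(np.logaddexp(0.0, g_noise))
```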
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
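For context, a tensor ring (TR) decomposition represents a tensor by a cyclic chain of third-order cores. A minimal reconstruction routine showing the TR contraction itself (not the paper's auto-weighted algorithm):

```python
import numpy as np

def tr_to_full(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}),
    with r_{N+1} = r_1, back into the full tensor:
    T[i1,...,iN] = trace(G_1[:,i1,:] @ ... @ G_N[:,iN,:])."""
    full = cores[0]                                    # (r1, n1, r2)
    for G in cores[1:]:                                # absorb one core at a time
        full = np.einsum('a...b,bnc->a...nc', full, G)
    return np.trace(full, axis1=0, axis2=full.ndim - 1)  # close the ring

# e.g. cores of shapes (2, 4, 3) and (3, 5, 2) reconstruct a 4 x 5 tensor
```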
arXiv Detail & Related papers (2023-05-15T22:08:47Z)
- Hotelling Deflation on Large Symmetric Spiked Tensors [10.706763980556445]
We provide a precise characterization of the large-dimensional performance of deflation in terms of the alignments of the vectors obtained by successive rank-1 approximation.
Our analysis allows an understanding of the deflation mechanism in the presence of noise and can be exploited for designing more efficient signal estimation methods.
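Deflation by successive rank-1 approximation is simple to state in code. A sketch for a symmetric third-order tensor using plain higher-order power iteration (initialization and iteration counts are illustrative; the paper analyzes this mechanism rather than prescribing an implementation):

```python
import numpy as np

def rank1_power(T, iters=100):
    """Higher-order power iteration for the dominant rank-1 component
    lam * (u o u o u) of a symmetric third-order tensor T."""
    u = np.random.randn(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)
    return lam, u

def hotelling_deflation(T, r):
    """Peel off r components: estimate a rank-1 term, subtract it,
    and repeat on the deflated tensor."""
    comps = []
    for _ in range(r):
        lam, u = rank1_power(T)
        T = T - lam * np.einsum('i,j,k->ijk', u, u, u)
        comps.append((lam, u))
    return comps
```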
arXiv Detail & Related papers (2023-04-20T12:16:05Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
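As a simplified illustration of scaled gradient descent for RPCA, here is the matrix analogue: hard-threshold the residual to estimate sparse corruptions, then take factor updates preconditioned by (R^T R)^{-1} and (L^T L)^{-1}. The thresholding rule and step sizes below are assumptions, not the paper's tensor algorithm:

```python
import numpy as np

def scaled_gd_rpca(Y, rank, alpha=0.1, eta=0.5, iters=100):
    """Simplified matrix RPCA via scaled gradient descent: estimate
    sparse corruptions by keeping the largest residual entries, then
    update the low-rank factors with preconditioned gradient steps."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)  # spectral initialization
    L = U[:, :rank] * np.sqrt(s[:rank])
    R = Vt[:rank].T * np.sqrt(s[:rank])
    S = np.zeros_like(Y)
    for _ in range(iters):
        resid = Y - L @ R.T
        cutoff = np.quantile(np.abs(resid), 1.0 - alpha)
        S = resid * (np.abs(resid) > cutoff)          # sparse outlier estimate
        G = L @ R.T + S - Y                           # gradient of 0.5||LR^T + S - Y||^2
        # scaling by (R^T R)^{-1}, (L^T L)^{-1} removes ill-conditioning
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
        L = L_new
    return L @ R.T, S
```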
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity-regularized models and apply it to shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves a linear convergence rate with lower overall complexity and can eliminate almost all inactive features in a finite number of iterations almost surely.
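Safe screening is easiest to see in its static form. A sketch of the classical SAFE test for the Lasso (the paper's contribution is a dynamic, distributed refinement of this idea; the rule below is the original El Ghaoui et al. test):

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Static SAFE rule for the Lasso min 0.5*||y - Xb||^2 + lam*||b||_1:
    feature j is provably inactive at the optimum if
    |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max."""
    corr = np.abs(X.T @ y)
    lam_max = corr.max()
    bound = lam - np.linalg.norm(X, axis=0) * np.linalg.norm(y) \
                  * (lam_max - lam) / lam_max
    return corr < bound          # True => column can be safely discarded
```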
arXiv Detail & Related papers (2022-04-23T02:45:55Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
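The extra-gradient template referred to above takes a lookahead step and then updates from the gradients at the lookahead point. A minimal deterministic sketch on a toy bilinear saddle problem (the paper's algorithms add variance reduction on top of this template):

```python
import numpy as np

def extragradient(grad_x, grad_y, x, y, eta=0.1, iters=500):
    """Extra-gradient method for min_x max_y f(x, y): take a lookahead
    step, then update from the gradients at the lookahead point."""
    for _ in range(iters):
        x_half = x - eta * grad_x(x, y)       # prediction step
        y_half = y + eta * grad_y(x, y)
        x = x - eta * grad_x(x_half, y_half)  # correction step
        y = y + eta * grad_y(x_half, y_half)
    return x, y

# Toy bilinear saddle f(x, y) = x^T A y with unique equilibrium at 0.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
x, y = extragradient(lambda x, y: A @ y, lambda x, y: A.T @ x,
                     np.ones(2), np.ones(2))
```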
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Disentangled Information Bottleneck [22.587164077221917]
We introduce the Disentangled Information Bottleneck (DisenIB), which maximally compresses the source without loss of target prediction performance.
Our method consistently achieves maximum compression and performs well in terms of generalization, robustness to adversarial attack, out-of-distribution detection, and supervised disentangling.
arXiv Detail & Related papers (2020-12-14T09:44:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.