Normalized Iterative Hard Thresholding for Tensor Recovery
- URL: http://arxiv.org/abs/2507.04228v1
- Date: Sun, 06 Jul 2025 03:36:50 GMT
- Title: Normalized Iterative Hard Thresholding for Tensor Recovery
- Authors: Li Li, Yuneng Liang, Kaijie Zheng, Jian Lu
- Abstract summary: Low-rank recovery builds upon ideas from the theory of compressive sensing. We propose a tensor extension of NIHT, referred to as TNIHT, for the recovery of low-rank tensors.
- Score: 7.5277782201584085
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Low-rank recovery builds upon ideas from the theory of compressive sensing, which predicts that sparse signals can be accurately reconstructed from incomplete measurements. Iterative thresholding-type algorithms, particularly the normalized iterative hard thresholding (NIHT) method, have been widely used in compressed sensing (CS) and applied to matrix recovery tasks. In this paper, we propose a tensor extension of NIHT, referred to as TNIHT, for the recovery of low-rank tensors under two widely used tensor decomposition models. This extension enables the effective reconstruction of high-order low-rank tensors from a limited number of linear measurements by leveraging the inherent low-dimensional structure of multi-way data. Specifically, we consider both the CANDECOMP/PARAFAC (CP) rank and the Tucker rank to characterize tensor low-rankness within the TNIHT framework. At the same time, we establish a convergence theorem for the proposed TNIHT method under the tensor restricted isometry property (TRIP), providing theoretical support for its recovery guarantees. Finally, we evaluate the performance of TNIHT through numerical experiments on synthetic, image, and video data, and compare it with several state-of-the-art algorithms.
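For orientation, the snippet below is a minimal sketch of what a normalized iterative hard thresholding step can look like in the Tucker setting: a gradient step with an adaptive (normalized) step size, followed by a projection onto a fixed Tucker rank via truncated HOSVD. The measurement matrix `A`, the helper `truncated_hosvd`, and the simplified step-size rule are illustrative assumptions, not the authors' implementation; the paper's TNIHT may differ in details such as how the step size is normalized and how the CP-rank variant performs its truncation.

```python
import numpy as np

def truncated_hosvd(X, ranks):
    """Project X onto (approximately) Tucker rank <= ranks via truncated HOSVD."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])                       # leading left singular vectors of each unfolding
    core = X
    for mode, U in enumerate(factors):                 # core = X x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(factors):                 # expand back: core x_1 U1 x_2 U2 ...
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

def tniht_sketch(A, y, shape, ranks, n_iter=200):
    """Hypothetical TNIHT-style iteration: A is an (m, prod(shape)) matrix acting on vec(X)."""
    X = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (y - A @ X.ravel())).reshape(shape)  # negative gradient of 0.5 * ||y - A vec(X)||^2
        # NIHT-style adaptive step size; the paper's rule may restrict grad to the current low-rank subspace.
        mu = np.linalg.norm(grad) ** 2 / (np.linalg.norm(A @ grad.ravel()) ** 2 + 1e-12)
        X = truncated_hosvd(X + mu * grad, ranks)           # gradient step + Tucker-rank truncation
    return X
```

The structure mirrors NIHT for vectors: the hard-thresholding (sparsity projection) step is replaced by a low-rank projection, and the step size is normalized by how much the measurement operator stretches the current gradient direction.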
Related papers
- Compressive Imaging Reconstruction via Tensor Decomposed Multi-Resolution Grid Encoding [50.54887630778593]
Compressive imaging (CI) reconstruction aims to recover high-dimensional images from low-dimensional compressed measurements.
Existing unsupervised representations may struggle to achieve a desired balance between representation ability and efficiency.
We propose Decomposed multi-resolution Grid encoding (GridTD), an unsupervised continuous representation framework for CI reconstruction.
arXiv Detail & Related papers (2025-07-10T12:36:20Z)
- Score-Based Model for Low-Rank Tensor Recovery [49.158601255093416]
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis.
Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions.
We propose a score-based model that eliminates the need for predefined structural or distributional assumptions.
arXiv Detail & Related papers (2025-06-27T15:05:37Z)
- Low-Rank Implicit Neural Representation via Schatten-p Quasi-Norm and Jacobian Regularization [49.158601255093416]
We propose a CP-based low-rank tensor function parameterized by neural networks for implicit neural representation.
For smoothness, we propose a regularization term based on the spectral norm of the Jacobian and Hutchinson's trace estimator (a generic sketch of this estimator is given after this list).
Our proposed smoothness regularization is SVD-free and avoids explicit chain rule derivations.
arXiv Detail & Related papers (2025-06-27T11:23:10Z)
- Spectral-Spatial Extraction through Layered Tensor Decomposition for Hyperspectral Anomaly Detection [6.292153194561472]
Low-rank tensor representation (LRTR) methods are very useful for hyperspectral anomaly detection (HAD).
We first apply non-negative matrix factorization (NMF) to alleviate spectral dimensionality redundancy and extract spectral anomalies.
We then employ LRTR to extract spatial anomalies while mitigating spatial redundancy, yielding a highly efficient layered tensor decomposition framework for HAD.
Experimental results on the Airport-Beach-Urban and MVTec datasets demonstrate that our approach outperforms state-of-the-art methods in the HAD task.
arXiv Detail & Related papers (2025-03-07T07:08:14Z)
- Fast and Provable Tensor-Train Format Tensor Completion via Preconditioned Riemannian Gradient Descent [4.376623639964006]
This paper investigates the low-rank tensor completion problem based on the tensor train (TT) format.
We propose a preconditioned Riemannian gradient descent algorithm (PRGD) to solve low TT-rank tensor completion and establish its linear convergence.
In practical applications such as hyperspectral image completion and quantum state tomography, the PRGD algorithm significantly reduces the number of iterations, thereby substantially reducing the computational time.
arXiv Detail & Related papers (2025-01-23T05:03:50Z)
- Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA [39.084456109467204]
We propose an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time.
We show that RTPCA-SGD achieves linear convergence to the true low-rank tensor at a constant rate, independent of the condition number.
arXiv Detail & Related papers (2025-01-08T15:25:19Z)
- Low-rank tensor completion via tensor joint rank with logarithmic composite norm [2.5191729605585005]
A new method, termed tensor joint rank with logarithmic composite norm (TJLC), is proposed.
The proposed method achieves satisfactory recovery even when the observed information is as low as 1%, and the recovery performance improves significantly as the observed information increases.
arXiv Detail & Related papers (2023-09-28T07:17:44Z)
- On High-dimensional and Low-rank Tensor Bandits [53.0829344775769]
This work studies a general tensor bandits model, where actions and system parameters are represented by tensors as opposed to vectors.
A novel bandit algorithm, coined TOFU (Tensor Optimism in the Face of Uncertainty), is developed.
Theoretical analyses show that TOFU improves the best-known regret upper bound by a multiplicative factor that grows exponentially in the system order.
arXiv Detail & Related papers (2023-05-06T00:43:36Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
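The entry above on "Low-Rank Implicit Neural Representation via Schatten-p Quasi-Norm and Jacobian Regularization" names Hutchinson's trace estimator as the device that makes its Jacobian-based smoothness term tractable. How that paper combines it with the spectral norm is not spelled out here; purely as background, the following sketches the estimator itself, applied to tr(J^T J) (the squared Frobenius norm of a Jacobian) using only Jacobian-vector products. The function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def hutchinson_jacobian_energy(jvp, dim, n_probes=16, seed=None):
    """Estimate tr(J^T J) = ||J||_F^2 for a map whose Jacobian-vector product
    v -> J v is available as `jvp`, via Hutchinson's estimator:
        tr(J^T J) = E[ v^T J^T J v ] = E[ ||J v||^2 ]  for Rademacher probes v.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=dim)   # i.i.d. Rademacher probe, E[v v^T] = I
        total += float(np.sum(jvp(v) ** 2))     # ||J v||^2 for this probe
    return total / n_probes
```

With an explicit matrix J, passing `jvp = lambda v: J @ v` recovers `np.linalg.norm(J, 'fro')**2` in expectation; in a neural-network setting the same role is played by a JVP from an autodiff framework, which is what makes such regularizers SVD-free.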