Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation
- URL: http://arxiv.org/abs/2305.11495v1
- Date: Fri, 19 May 2023 07:51:36 GMT
- Title: Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation
- Authors: Wenjin Qin, Hailin Wang, Feng Zhang, Weijun Ma, Jianjun Wang, and
  Tingwen Huang
- Abstract summary: Two efficient low-rank tensor approximation approaches are first devised under the order-d (d >= 3) T-SVD framework.
The proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimation accuracy.
- Score: 29.486258609570545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:   Within the tensor singular value decomposition (T-SVD) framework, existing
robust low-rank tensor completion approaches have made great achievements in
various areas of science and engineering. Nevertheless, these methods involve
the T-SVD based low-rank approximation, which suffers from high computational
costs when dealing with large-scale tensor data. Moreover, most of them are
only applicable to third-order tensors. To address these issues, in this article,
two efficient low-rank tensor approximation approaches fusing randomized
techniques are first devised under the order-d (d >= 3) T-SVD framework. On
this basis, we then further investigate the robust high-order tensor completion
(RHTC) problem, for which a double nonconvex model and its corresponding fast
optimization algorithms with convergence guarantees are developed. To the
best of our knowledge, this is the first study to incorporate the randomized
low-rank approximation into the RHTC problem. Empirical studies on large-scale
synthetic and real tensor data illustrate that the proposed method outperforms
other state-of-the-art approaches in terms of both computational efficiency and
estimation accuracy.
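
The claimed speed-up comes from replacing the exact T-SVD with a randomized low-rank sketch computed slice-wise in the Fourier domain. Below is a minimal illustrative sketch of this idea for the third-order case (d = 3); the function name, tubal rank r, and oversampling parameter p are illustrative assumptions, not the paper's exact algorithms. For order-d tensors the same slice-wise step applies after taking FFTs along modes 3 through d.

```python
import numpy as np

def randomized_tsvd_approx(X, r, p=5):
    """Rank-r t-SVD approximation of a 3-way tensor X via randomized SVD
    applied to each frontal slice in the Fourier domain."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)          # move tubes to the Fourier domain
    Lf = np.empty_like(Xf)
    for k in range(n3):                  # independent slice-wise problems
        A = Xf[:, :, k]
        G = np.random.randn(n2, r + p)   # Gaussian test matrix
        Q, _ = np.linalg.qr(A @ G)       # approximate range of A
        B = Q.conj().T @ A               # small (r+p) x n2 projected problem
        Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
        U = Q @ Ub[:, :r]                # lift back to the original rows
        Lf[:, :, k] = U @ (s[:r, None] * Vh[:r, :])
    return np.real(np.fft.ifft(Lf, axis=2))  # back to the original domain
```

A production version would exploit the conjugate symmetry of the FFT of a real tensor to halve the slice-wise work; that bookkeeping is omitted here for clarity.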
 
      
Related papers
        - Accelerating Large-Scale Regularized High-Order Tensor Recovery [14.523033614261628]
 Existing tensor recovery methods fail to recognize the impact of scale variations on structural characteristics. Existing studies also face prohibitive computational costs when dealing with large-scale high-order tensor data.
 arXiv  Detail & Related papers  (2025-06-11T10:53:07Z)
- Stochastic Primal-Dual Double Block-Coordinate for Two-way Partial AUC Maximization [56.805574957824135]
 Two-way partial AUC (TPAUC) is a critical performance metric for binary classification with imbalanced data. Existing algorithms for TPAUC optimization remain under-explored. We introduce two innovative stochastic primal-dual double block-coordinate algorithms for TPAUC optimization.
 arXiv  Detail & Related papers  (2025-05-28T03:55:05Z)
- A First-order Generative Bilevel Optimization Framework for Diffusion Models [57.40597004445473]
 Diffusion models iteratively denoise data samples to synthesize high-quality outputs.
Traditional bilevel methods fail due to infinite-dimensional probability space and prohibitive sampling costs.
We formalize this challenge as a generative bilevel optimization problem.
Our first-order bilevel framework overcomes the incompatibility of conventional bilevel methods with diffusion processes.
 arXiv  Detail & Related papers  (2025-02-12T21:44:06Z)
- Guaranteed Nonconvex Low-Rank Tensor Estimation via Scaled Gradient Descent [4.123899820318987]
 This paper develops a scaled gradient descent (ScaledGD) algorithm to directly estimate the tensor factors.
In theory, ScaledGD achieves linear convergence at a constant rate that is independent of the condition number of the ground truth low-rank tensor.
 Numerical examples are provided to demonstrate the efficacy of ScaledGD in accelerating the convergence rate of ill-conditioned low-rank tensor estimation.
 arXiv  Detail & Related papers  (2025-01-03T08:26:01Z)
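
The ScaledGD mechanism is easiest to see in the matrix special case f(L, R) = 0.5*||L R^T - Y||_F^2: each factor's gradient step is preconditioned by the inverse Gram matrix of the other factor, which is what removes the condition-number dependence. A minimal sketch under these assumptions follows (the random initialization, step size, and iteration count are illustrative; the paper itself operates on tensor factors and uses spectral initialization):

```python
import numpy as np

def scaled_gd(Y, r, eta=0.5, iters=200):
    """Scaled gradient descent for the toy objective 0.5*||L @ R.T - Y||_F^2."""
    n1, n2 = Y.shape
    L = np.random.randn(n1, r)
    R = np.random.randn(n2, r)
    for _ in range(iters):
        G = L @ R.T - Y  # gradient of the loss w.r.t. the product L @ R.T
        # Preconditioning by (R^T R)^{-1} and (L^T L)^{-1} equalizes the local
        # geometry across factors; with eta = 1 the L-update reduces to an
        # alternating-least-squares step Y @ R @ inv(R.T @ R).
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R_new = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L, R
```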
- Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization [7.538654977500241]
 We propose a novel high-order TRPCA method, named Low-Multi-rank High-order Bayesian Robust Tensor Factorization (LMH-BRTF), within the Bayesian framework.
Specifically, we decompose the observed corrupted tensor into three parts, i.e., the low-rank component, the sparse component, and the noise component.
By constructing a low-rank model for the low-rank component based on the order-$d$ t-SVD, LMH-BRTF can automatically determine the tensor multi-rank.
 arXiv  Detail & Related papers  (2023-11-10T06:15:38Z)
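 In symbols, the three-part decomposition reads (generic notation, assumed rather than copied from the paper): $\mathcal{Y} = \mathcal{L} + \mathcal{S} + \mathcal{E}$, where $\mathcal{L}$ has low multi-rank under the order-$d$ t-SVD, $\mathcal{S}$ collects sparse outliers, and $\mathcal{E}$ is dense noise; the Bayesian treatment places priors on each part so that the multi-rank is inferred automatically.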
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
 Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously tolerate inexact computations of the Hessian, gradient, and function values.
 arXiv  Detail & Related papers  (2023-10-18T10:29:58Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
 We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
 arXiv  Detail & Related papers  (2022-05-19T08:37:56Z)
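 For reference, a truncated Schatten $p$-norm leaves the largest $t$ singular values unpenalized and shrinks only the tail; for a matrix $X$ with singular values $\sigma_1 \ge \sigma_2 \ge \dots$ the standard definition is $\|X\|_{p,t}^{p} = \sum_{i=t+1}^{\min(m,n)} \sigma_i^{p}(X)$ with $0 < p \le 1$, the tensor version being applied to suitable matricizations of the tensor; the paper's exact definition may differ in details.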
- Computationally Efficient and Statistically Optimal Robust Low-rank Matrix and Tensor Estimation [15.389011827844572]
 Low-rank linear shrinkage estimation under heavy-tailed noise is challenging, both computationally and statistically.
We introduce a novel sub-gradient algorithm (RsGrad) which is not only computationally efficient but also statistically optimal.
 arXiv  Detail & Related papers  (2022-03-02T09:05:15Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
 We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
 arXiv  Detail & Related papers  (2021-09-23T17:38:24Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
 We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
 arXiv  Detail & Related papers  (2021-07-02T16:21:52Z)
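 Here $\epsilon$-stationarity is the usual first-order notion for a nonconvex objective $F$ (stated generically; the paper works with a smoothed AUPRC surrogate): $\mathbb{E}\,\|\nabla F(x)\| \le \epsilon$.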
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
 It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with a dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
 arXiv  Detail & Related papers  (2021-06-10T17:54:21Z)
- Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
 A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
 arXiv  Detail & Related papers  (2021-04-29T17:44:49Z)
- Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion [1.3406858660972554]
 We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
 arXiv  Detail & Related papers  (2020-05-28T08:53:54Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
 We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.
 arXiv  Detail & Related papers  (2020-05-21T17:05:27Z)
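
The clipping operator at the heart of clipped-SSTM and clipped SGD is simple: rescale the stochastic gradient whenever its norm exceeds a threshold, which bounds the influence of heavy-tailed noise. A minimal sketch (the threshold name lam is an illustrative choice):

```python
import numpy as np

def clip(g, lam):
    """Return g rescaled so that its Euclidean norm is at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g
```

Clipped-SSTM then feeds the clipped gradient into an accelerated similar-triangles update in place of the raw stochastic gradient.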
- Tensor denoising and completion based on ordinal observations [11.193504036335503]
 We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
 arXiv  Detail & Related papers  (2020-02-16T07:09:56Z)
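 A multi-linear cumulative link model of this kind posits, for each observed entry $\omega$ and ordinal levels $1, \dots, K$ (generic notation, assumed here rather than taken from the paper): $\mathbb{P}(Y_\omega \le k) = f(b_k - \Theta_\omega)$ for $k = 1, \dots, K-1$, where $f$ is a link function (e.g., the logistic CDF), $b_1 < \dots < b_{K-1}$ are cut-off parameters, and $\Theta$ is the low-rank parameter tensor targeted by the rank-constrained M-estimator.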
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     