Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion
- URL: http://arxiv.org/abs/2005.14521v2
- Date: Mon, 22 Jun 2020 08:58:14 GMT
- Title: Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion
- Authors: Haijin Zeng, Xiaozhen Xie, Jifeng Ning
- Abstract summary: We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization (BSUM) based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
- Score: 1.3406858660972554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Higher-order low-rank tensors arise in many data processing
applications and have attracted great interest. Inspired by low-rank
approximation theory, researchers have proposed a series of effective tensor
completion methods. However, most of these methods directly consider only the
global low-rankness of the underlying tensor, which is insufficient at low
sampling rates; in addition, the single nuclear norm or its relaxation is
usually adopted to approximate the rank function, which leads to a suboptimal
solution that deviates from the original one. To alleviate these problems, we
propose a novel low-rank approximation of tensor multi-modes (LRATM) model, in
which a double nonconvex $L_{\gamma}$ norm is designed to represent the
underlying joint manifold drawn from the factors of the modal factorizations
of the underlying tensor. A block successive upper-bound minimization (BSUM)
based algorithm is designed to efficiently solve the proposed model, and we
show that the numerical scheme converges to coordinate-wise minimizers.
Numerical results on three types of public multi-dimensional datasets show
that our algorithm can recover a variety of low-rank tensors with
significantly fewer samples than the compared methods.
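To make the rank-surrogate point concrete, one common nonconvex surrogate
interpolates between the rank function and the nuclear norm. The form below is
a plausible illustration only; the paper's exact $L_{\gamma}$ definition may
differ:

```latex
% Assumed surrogate form; \sigma_i(X) are the singular values of X.
\|X\|_{L_{\gamma}} \;=\; \sum_i \frac{(1+\gamma)\,\sigma_i(X)}{\gamma+\sigma_i(X)},
\qquad
\lim_{\gamma\to 0^{+}} \|X\|_{L_{\gamma}} = \operatorname{rank}(X),
\qquad
\lim_{\gamma\to\infty} \|X\|_{L_{\gamma}} = \|X\|_{*}.
```

The sketch below shows the general shape of a multi-mode completion loop in
this spirit. The `unfold`/`fold` helpers are standard mode-n matricization;
everything else (the soft-threshold stand-in `shrink`, the mode averaging, and
the parameters `tau` and `n_iter`) is an illustrative assumption, not the
paper's BSUM update:

```python
# Minimal multi-mode tensor completion sketch (NumPy). NOT the authors'
# LRATM algorithm: shrinkage rule, averaging, and parameters are assumed.
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: mode-`mode` fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` under the same ordering convention."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def shrink(s, tau):
    # Stand-in soft-thresholding of the singular values; LRATM would use
    # the proximal step of its nonconvex L_gamma norm here instead.
    return np.maximum(s - tau, 0.0)

def complete(T_obs, mask, tau=0.1, n_iter=200):
    """Shrink each mode's unfolding, average the refolded estimates,
    then re-impose the observed entries."""
    X = np.where(mask, T_obs, T_obs[mask].mean())
    for _ in range(n_iter):
        est = []
        for mode in range(X.ndim):
            U, s, Vt = np.linalg.svd(unfold(X, mode), full_matrices=False)
            est.append(fold((U * shrink(s, tau)) @ Vt, mode, X.shape))
        X = sum(est) / X.ndim    # combine the multi-mode estimates
        X[mask] = T_obs[mask]    # keep observed entries fixed
    return X
```

Penalizing every mode's unfolding, rather than a single matricization, is what
lets this family of methods exploit low-rank structure jointly across modes.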
Related papers
- Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of nonconvex functions is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is designed to solve the resulting model.
arXiv Detail & Related papers (2024-10-24T03:33:20Z)
- Irregular Tensor Low-Rank Representation for Hyperspectral Image Representation [71.69331824668954]
Low-rank tensor representation is an important approach to alleviate spectral variations.
Previous low-rank representation methods can only be applied to regular data cubes.
We propose a novel irregular low-rank representation method that can efficiently model irregular 3D cubes.
arXiv Detail & Related papers (2024-10-24T02:56:22Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias of the $l_1$-norm in the low-rank tensor completion problem, nonconvex rank surrogates/regularizers have been suggested.
These regularizers are applied to low-rank restoration in place of the nuclear norm, and efficient algorithms based on the alternating direction method of multipliers are proposed.
arXiv Detail & Related papers (2023-10-10T01:00:13Z)
- Many-body Approximation for Non-negative Tensors [17.336552862741133]
We present an alternative approach to decompose non-negative tensors, called many-body approximation.
Traditional decomposition methods assume low-rankness in the representation, resulting in difficulties in global optimization and target rank selection.
arXiv Detail & Related papers (2022-09-30T09:45:43Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
- Tensor completion using enhanced multiple modes low-rank prior and total variation [1.3406858660972554]
We propose a novel model to recover a low-rank tensor by simultaneously applying double nuclear norm regularized low-rank matrix factorizations to all-mode matricizations of the underlying tensor.
Subsequence convergence of our algorithm can be established, and our algorithm converges to the coordinate-wise minimizers under some mild conditions.
arXiv Detail & Related papers (2020-04-19T02:23:06Z)
- Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
arXiv Detail & Related papers (2020-02-16T07:09:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.