Multi-Tensor Network Representation for High-Order Tensor Completion
- URL: http://arxiv.org/abs/2109.04022v1
- Date: Thu, 9 Sep 2021 03:50:19 GMT
- Title: Multi-Tensor Network Representation for High-Order Tensor Completion
- Authors: Chang Nie, Huan Wang, Zhihui Lai
- Abstract summary: This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samples.
We consider that a tensor is a superposition of multiple low-rank components.
In this paper, we propose a fundamental tensor decomposition framework: Multi-Tensor Network Representation (MTNR).
- Score: 25.759851542474447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work studies the problem of completing high-dimensional data
(referred to as tensors) from partially observed samples. We consider that a tensor is a
superposition of multiple low-rank components. In particular, each component
can be represented as multilinear connections over several latent factors and
naturally mapped to a specific tensor network (TN) topology. In this paper, we
propose a fundamental tensor decomposition (TD) framework: Multi-Tensor Network
Representation (MTNR), which can be regarded as a linear combination of a range
of TD models, e.g., CANDECOMP/PARAFAC (CP) decomposition, Tensor Train (TT),
and Tensor Ring (TR). Specifically, MTNR represents a high-order tensor as the
addition of multiple TN models, and the topology of each TN is automatically
generated instead of manually pre-designed. For the optimization phase, an
adaptive topology learning (ATL) algorithm is presented to obtain latent
factors of each TN based on a rank incremental strategy and a projection error
measurement strategy. In addition, we theoretically establish the fundamental
multilinear operations for the tensors with TN representation, and reveal the
structural transformation of MTNR to a single TN. Finally, MTNR is applied to a
typical task, tensor completion, and two effective algorithms are proposed for
the exact recovery of incomplete data based on the Alternating Least Squares
(ALS) scheme and Alternating Direction Method of Multiplier (ADMM) framework.
Extensive numerical experiments on synthetic data and real-world datasets
demonstrate the effectiveness of MTNR compared with the state-of-the-art
methods.
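To make the core idea concrete, below is a minimal NumPy sketch (not the authors' implementation) of completion with a superposition of low-rank components: each component is a CP model fitted to the observed entries by masked ALS, and each component's rank is grown until the projection error on the observed entries stops improving, loosely mirroring the ATL rank-incremental strategy. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def cp_reconstruct(factors):
    """Rebuild a 3rd-order tensor from CP factors A (I,R), B (J,R), C (K,R)."""
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def masked_als_cp(T, mask, rank, n_iter=50, seed=0):
    """Fit a single rank-`rank` CP component to the observed entries of T."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # EM-style step: impute missing entries with the current model.
        X = np.where(mask, T, cp_reconstruct((A, B, C)))
        # ALS updates; the Gram trick uses (M^T M) = (B^T B) * (C^T C).
        M = np.einsum('jr,kr->jkr', B, C).reshape(J * K, rank)
        A = X.reshape(I, -1) @ M @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        M = np.einsum('ir,kr->ikr', A, C).reshape(I * K, rank)
        B = X.transpose(1, 0, 2).reshape(J, -1) @ M @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        M = np.einsum('ir,jr->ijr', A, B).reshape(I * J, rank)
        C = X.transpose(2, 0, 1).reshape(K, -1) @ M @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def mtnr_like_completion(T, mask, n_components=2, max_rank=5, tol=1e-3):
    """Greedily add CP components; grow each component's rank while the
    fit on the observed entries keeps improving (rank-incremental search)."""
    residual = np.where(mask, T, 0.0)
    estimate = np.zeros_like(T, dtype=float)
    for c in range(n_components):
        best_err, best_factors = np.inf, None
        for rank in range(1, max_rank + 1):
            factors = masked_als_cp(residual, mask, rank, seed=c)
            err = np.linalg.norm((residual - cp_reconstruct(factors)) * mask)
            if err < best_err - tol:
                best_err, best_factors = err, factors
            else:
                break  # projection error stopped improving: freeze the rank
        component = cp_reconstruct(best_factors)
        estimate += component
        residual = np.where(mask, residual - component, 0.0)
    return estimate

# Toy usage: the ground truth is itself a superposition of two CP components.
rng = np.random.default_rng(42)
T = (cp_reconstruct([rng.standard_normal((8, 2)) for _ in range(3)])
     + cp_reconstruct([rng.standard_normal((8, 3)) for _ in range(3)]))
mask = rng.random(T.shape) < 0.7  # observe 70% of the entries
T_hat = mtnr_like_completion(T, mask)
print('relative error:', np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```

In MTNR proper, each component may take a different, automatically generated TN topology (e.g., CP, TT, TR); this sketch fixes CP for every component for brevity.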
Related papers
- Tensorized LSSVMs for Multitask Regression [48.844191210894245]
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.
A new MTL method is proposed by leveraging low-rank tensor analysis and Least Squares Support Vector Machines (LSSVMs), yielding the tLSSVM-MTL model.
arXiv Detail & Related papers (2023-03-04T16:36:03Z)
- Latent Matrices for Tensor Network Decomposition and its Applications to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes a tensor into smaller ones and thereby speeds up computation.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point drop in accuracy.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions via the alternating direction method of multipliers (ADMM). A minimal sketch of the truncated Schatten p-norm appears after this list.
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Multi-mode Tensor Train Factorization with Spatial-spectral Regularization for Remote Sensing Images Recovery [1.3272510644778104]
We propose a novel low-MTT-rank tensor completion model via multi-mode TT factorization and spatial-spectral smoothness regularization.
We show that the proposed MTTD3R method outperforms the compared methods in terms of visual and quantitative measures.
arXiv Detail & Related papers (2022-05-05T07:36:08Z)
- Tensor Full Feature Measure and Its Nonconvex Relaxation Applications to Tensor Recovery [1.8899300124593645]
We propose a new tensor sparsity measure called the Full Feature Measure (FFM).
It can simultaneously describe the feature information of each dimension and connect the Tucker rank with the tensor tube rank.
Two efficient models based on FFM are proposed, and two Alternating Direction Method of Multipliers (ADMM) algorithms are developed to solve them.
arXiv Detail & Related papers (2021-09-25T01:44:34Z)
- Residual Tensor Train: a Flexible and Efficient Approach for Learning Multiple Multilinear Correlations [4.754987078078158]
In this paper, we present a novel Residual Tensor Train (ResTT), which integrates the merits of TT and the residual structure.
In particular, we prove that the fully-connected layer in neural networks and the Volterra series can be taken as special cases of ResTT.
We also derive a rule for weight initialization and prove that it is much more relaxed than that of TT, which means ResTT can better address the vanishing and exploding gradient problem.
arXiv Detail & Related papers (2021-08-19T12:47:16Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Adaptive Learning of Tensor Network Structures [6.407946291544721]
We leverage the TN formalism to develop a generic and efficient adaptive algorithm to learn the structure and the parameters of a TN from data.
Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function.
arXiv Detail & Related papers (2020-08-12T16:41:56Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the exponential parameter cost of explicitly modelling every interaction, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
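As a side note on the truncated tensor Schatten p-norm referenced in the traffic-imputation entry above: it penalizes only the tail singular values (those beyond the largest t) of each mode unfolding, raised to the power p. The sketch below assumes a uniform average over modes and illustrative choices of p and t; the paper's exact weighting may differ.

```python
import numpy as np

def truncated_schatten_p(T, p=0.5, t=2):
    """Average over modes of sum_{i>t} sigma_i^p of each mode unfolding."""
    total = 0.0
    for mode in range(T.ndim):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        sigma = np.linalg.svd(unfolding, compute_uv=False)
        total += np.sum(sigma[t:] ** p)  # skip the t largest singular values
    return total / T.ndim

# Usage: a near-low-rank tensor scores far lower than a random one.
rng = np.random.default_rng(0)
low = np.einsum('ir,jr,kr->ijk', *[rng.standard_normal((10, 2)) for _ in range(3)])
noise = rng.standard_normal((10, 10, 10))
print(truncated_schatten_p(low), truncated_schatten_p(noise))
```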
This list is automatically generated from the titles and abstracts of the papers in this site.