Alternating minimization algorithms for graph regularized tensor completion
- URL: http://arxiv.org/abs/2008.12876v2
- Date: Sat, 11 Nov 2023 22:55:15 GMT
- Title: Alternating minimization algorithms for graph regularized tensor completion
- Authors: Yu Guan, Shuyu Dong, Bin Gao, P.-A. Absil, François Glineur
- Abstract summary: We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC).
Graph regularization benefits the learning accuracy of LRTC but, at the same time, induces coupling graph Laplacian terms.
We propose efficient alternating minimization algorithms by leveraging the block structure of the underlying CP decomposition-based model.
- Score: 8.26185178671935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a Canonical Polyadic (CP) decomposition approach to low-rank
tensor completion (LRTC) by incorporating external pairwise similarity
relations through graph Laplacian regularization on the CP factor matrices. The
usage of graph regularization entails benefits in the learning accuracy of
LRTC, but at the same time, induces coupling graph Laplacian terms that hinder
the optimization of the tensor completion model. In order to solve
graph-regularized LRTC, we propose efficient alternating minimization
algorithms by leveraging the block structure of the underlying CP
decomposition-based model. For the subproblems of alternating minimization, a
linear conjugate gradient subroutine is specifically adapted to
graph-regularized LRTC. Alternatively, we circumvent the complicating coupling
effects of the graph Laplacian terms by using an alternating direction method of
multipliers. Based on the Kurdyka-Łojasiewicz property, we show that the
sequence generated by the proposed algorithms globally converges to a critical
point of the objective function. Moreover, the complexity and convergence rate
are also derived. In addition, numerical experiments on synthetic and real data
show that the graph-regularized tensor completion model yields improved recovery
results compared to models without graph regularization, and that the proposed
algorithms achieve gains in time efficiency over existing algorithms.
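To make the model concrete: based only on the abstract above, the objective plausibly takes the following form for a third-order tensor observed on an index set Omega; the per-mode weights and the presence of a Laplacian term on every mode are assumptions, not details stated in the abstract.

```latex
\min_{A^{(1)},A^{(2)},A^{(3)}}\;
\frac{1}{2}\Big\|P_{\Omega}\big(\mathcal{T}-[\![A^{(1)},A^{(2)},A^{(3)}]\!]\big)\Big\|_F^2
+\sum_{n=1}^{3}\alpha_n\,\operatorname{tr}\big(A^{(n)\top}L_n A^{(n)}\big)
```

Here $P_\Omega$ keeps the observed entries, $[\![\cdot]\!]$ is the CP reconstruction from the factor matrices $A^{(n)}$, and $L_n$ is the graph Laplacian encoding pairwise similarities along mode $n$. Each block subproblem is quadratic in one factor matrix while the others are fixed, which is what makes a linear conjugate gradient subroutine applicable. The NumPy sketch below illustrates only the alternating-minimization structure; it takes plain gradient steps per block instead of the paper's adapted CG subroutine, and all names and hyperparameters are illustrative.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # [[A, B, C]]_{ijk} = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def block_gradient(E_masked, U, V, L, F, alpha, mode):
    # Gradient of 0.5*||P_Omega(T - [[A,B,C]])||_F^2 + alpha*tr(F' L F)
    # with respect to the mode-`mode` factor F; U, V are the other factors.
    KR = np.einsum('jr,kr->jkr', U, V).reshape(-1, F.shape[1])      # Khatri-Rao product
    E_unf = np.moveaxis(E_masked, mode, 0).reshape(F.shape[0], -1)  # mode unfolding
    return -E_unf @ KR + 2.0 * alpha * (L @ F)

def graph_reg_cp_completion(T, mask, laplacians, rank=5, alpha=1e-2,
                            lr=1e-3, n_outer=200, n_inner=5, seed=0):
    # T: data tensor (zeros off-mask), mask: 0/1 observation tensor,
    # laplacians: one symmetric graph Laplacian per mode.
    rng = np.random.default_rng(seed)
    factors = [0.1 * rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_outer):
        for mode in range(3):          # alternate over the CP blocks
            U, V = [factors[m] for m in range(3) if m != mode]
            for _ in range(n_inner):   # a few descent steps per block
                E = mask * (T - cp_reconstruct(*factors))
                g = block_gradient(E, U, V, laplacians[mode],
                                   factors[mode], alpha, mode)
                factors[mode] -= lr * g
    return factors
```

The ADMM variant mentioned in the abstract would instead introduce auxiliary copies of the factor matrices so that the Laplacian terms decouple from the data-fitting term; that splitting is not shown in this sketch.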
Related papers
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, iterative hard thresholding (IHT) and Riemannian gradient descent (RGD), to efficiently find solutions satisfying the established error bounds.
We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Network Topology Inference with Sparsity and Laplacian Constraints [18.447094648361453]
We tackle the network topology inference problem by utilizing Laplacian-constrained Gaussian graphical models.
We develop an efficient projection algorithm to solve the resulting problem.
arXiv Detail & Related papers (2023-09-02T15:06:30Z) - An inexact LPA for DC composite optimization and application to matrix completions with outliers [5.746154410100363]
This paper concerns a class of difference-of-convex (DC) composite optimization problems.
By leveraging the composite structure, we provide a condition for the potential function to have the KL property with exponent $1/2$ at the iterate sequence.
arXiv Detail & Related papers (2023-03-29T16:15:34Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting models lead to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - From graphs to DAGs: a low-complexity model and a scalable algorithm [0.0]
This paper presents a low-complexity model, called LoRAM for Low-Rank Additive Model, which combines low-rank matrix factorization with a sparsification mechanism for the continuous optimization of DAGs.
The proposed approach achieves a reduction from cubic to quadratic complexity while handling the same DAG characteristic function as NoTears.
arXiv Detail & Related papers (2022-04-10T10:22:56Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved comparable performance to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-01-26T22:11:06Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address bilinear optimization problems.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent (a toy sketch of this synchronous update appears after this list).
Our algorithm is applied to solve problems in which one variable is under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
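As a companion to the Cogradient Descent entry above, here is a toy illustration of a synchronous gradient step on a bilinear objective, where both blocks move using gradients evaluated at the same point. This is not the paper's CoGD algorithm (which additionally models the coupling between the blocks and handles sparsity constraints); the objective and step size are made up for illustration.

```python
import numpy as np

# Toy bilinear least squares: f(A, x) = 0.5 * ||A @ x - b||^2,
# jointly bilinear in (A, x) and convex in each block separately.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x = rng.standard_normal(5)
b = rng.standard_normal(20)

lr = 1e-2
for _ in range(500):
    r = A @ x - b            # shared residual at the current point
    gA = np.outer(r, x)      # gradient with respect to A
    gx = A.T @ r             # gradient with respect to x
    A -= lr * gA             # synchronous step: both blocks update
    x -= lr * gx             # from the same evaluation point

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # should approach zero
```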
This list is automatically generated from the titles and abstracts of the papers on this site.