Approximation Algorithms for Sparse Best Rank-1 Approximation to
Higher-Order Tensors
- URL: http://arxiv.org/abs/2012.03092v1
- Date: Sat, 5 Dec 2020 18:13:14 GMT
- Title: Approximation Algorithms for Sparse Best Rank-1 Approximation to
Higher-Order Tensors
- Authors: Xianpeng Mao and Yuning Yang
- Abstract summary: Sparse tensor best rank-1 approximation (BR1Approx) is a sparsity generalization of the dense tensor BR1Approx.
Four approximation algorithms are proposed; they are easy to implement and have low computational complexity.
We provide numerical experiments on synthetic and real data to illustrate the effectiveness of the proposed algorithms.
- Score: 1.827510863075184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse tensor best rank-1 approximation (BR1Approx), a sparsity
generalization of the dense tensor BR1Approx and a higher-order extension of
the sparse matrix BR1Approx, is one of the most important problems in sparse
tensor decomposition, with applications arising from statistics and machine
learning. By exploiting the multilinearity as well as the sparsity structure
of the problem, four approximation algorithms are proposed; they are easy to
implement, of low computational complexity, and can serve as initial
procedures for iterative algorithms. In addition, theoretically guaranteed
worst-case approximation lower bounds are proved for all the algorithms. We
provide numerical experiments on synthetic and real data to illustrate the
effectiveness of the proposed algorithms.
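For orientation, the optimization problem behind sparse tensor BR1Approx admits a standard formulation (reconstructed from the abstract's description, not quoted from the paper): given a $d$-th order tensor $\mathcal{A}$ and per-mode sparsity levels $k_1,\dots,k_d$,

$$\max_{x_1,\dots,x_d}\ \langle \mathcal{A},\, x_1 \otimes \cdots \otimes x_d \rangle \quad \text{s.t.}\quad \|x_j\|_2 = 1,\ \|x_j\|_0 \le k_j,\ j=1,\dots,d.$$

Taking each $k_j$ equal to the mode-$j$ dimension recovers the dense tensor BR1Approx, and $d=2$ recovers the sparse matrix BR1Approx mentioned above.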
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization with cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow inexact computations of the Hessian, gradient, and function values.
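To make the setting concrete, here is a minimal sketch of one trust-region step driven entirely by inexact oracles; this is a generic illustration, not the paper's algorithm, and `inexact_grad`/`inexact_hess` are hypothetical stand-ins for whatever approximation scheme is used:

```python
# Minimal trust-region step with inexact derivative oracles (illustrative).
import numpy as np

def tr_cauchy_step(x, inexact_grad, inexact_hess, radius):
    """One TR iteration: Cauchy point along the approximate negative gradient."""
    g = inexact_grad(x)                 # inexact gradient estimate
    H = inexact_hess(x)                 # inexact Hessian estimate
    gHg = g @ H @ g                     # curvature of the model along -g
    t_max = radius / np.linalg.norm(g)  # step length reaching the TR boundary
    # Minimize the quadratic model along -g, truncated at the boundary.
    t = min(g @ g / gHg, t_max) if gHg > 0 else t_max
    return x - t * g
```

A full method would then accept or reject the step and resize `radius` by comparing predicted and actual decrease, which is where conditions on the allowed inexactness of function values enter the analysis.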
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
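Schematically, plugging a learned surrogate into a cutting-plane loop could look as follows; all three callables are hypothetical placeholders (a relaxation solver, a cut generator, and an RL-trained cut scorer), not an API from the paper:

```python
# Generic cutting-plane loop with learned cut selection (illustrative).
def cutting_plane(solve_relaxation, generate_cuts, surrogate_score, max_rounds=50):
    cuts = []
    x = solve_relaxation(cuts)             # solve the initial relaxation
    for _ in range(max_rounds):
        candidates = generate_cuts(x)      # inequalities violated at x
        if not candidates:
            break                          # no violated cuts: x is feasible
        # The RL-trained surrogate replaces hand-crafted selection heuristics,
        # picking the candidate cut it predicts will tighten the bound most.
        cuts.append(max(candidates, key=surrogate_score))
        x = solve_relaxation(cuts)         # re-solve with the added cut
    return x
```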
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A rank-adaptive higher-order orthogonal iteration algorithm for
truncated Tucker decomposition [3.4666021409328653]
We show that the rank-adaptive HOOI algorithm is advantageous in terms of both accuracy and efficiency.
Further analysis of the HOOI algorithm and the classical alternating least squares method is presented to explain why rank adaptivity can be introduced into HOOI and how it works.
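To fix ideas, one sweep of the classical fixed-rank HOOI iteration (the procedure the rank-adaptive variant builds on) can be sketched as below; the rank-adaptation logic itself, the paper's contribution, is deliberately omitted:

```python
# One fixed-rank HOOI sweep for truncated Tucker decomposition (illustrative).
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move axis `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi_sweep(T, U):
    """Update each factor in U (list of orthonormal matrices) once."""
    for n in range(T.ndim):
        Y = T
        for m in range(T.ndim):
            if m != n:  # contract every mode except n with its factor
                Y = np.moveaxis(np.tensordot(U[m].T, Y, axes=(1, m)), 0, m)
        r = U[n].shape[1]
        # Leading left singular vectors of the projected unfolding.
        U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :r]
    return U
```

Factors are typically initialized by truncated HOSVD; a rank-adaptive variant would additionally grow or shrink each mode's rank based on the observed singular-value decay.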
arXiv Detail & Related papers (2021-10-25T00:46:02Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
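For reference, these rates concern the generic bilevel problem (a standard formulation; the paper's precise smoothness and lower-level assumptions are stated there):

$$\min_{x}\ \Phi(x) := f\bigl(x, y^*(x)\bigr) \quad \text{s.t.}\quad y^*(x) \in \arg\min_{y} g(x, y),$$

with $\mathcal{O}(\epsilon^{-1.5})$ counting the iterations needed to reach an $\epsilon$-accurate (e.g., $\epsilon$-stationary) point of $\Phi$.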
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- A Parameter-free Algorithm for Convex-concave Min-max Problems [33.39558541452866]
Parameter-free optimization algorithms are algorithms whose convergence rate is optimal with respect to the initial point, with no learning rate to tune.
As a by-product, we utilize the parameter-free algorithm to design a new algorithm, which obtains fast rates for min-max problems with a growth condition.
arXiv Detail & Related papers (2021-02-27T18:10:06Z)
- New Riemannian preconditioned algorithms for tensor completion via
polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
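For concreteness, a plain Euclidean gradient step for polyadic (CP) tensor completion is sketched below; the proposed algorithms precondition such steps using a non-Euclidean metric on the product space of the factor matrices, which this baseline sketch does not include (a 3rd-order, rank-R setup is assumed for illustration):

```python
# Euclidean gradient step for CP tensor completion (illustrative baseline).
import numpy as np

def cp_grad_step(factors, T, mask, lr=1e-2):
    """factors = (A, B, C); T = observed tensor; mask = 1 on observed entries."""
    A, B, C = factors
    X = np.einsum('ir,jr,kr->ijk', A, B, C)   # current rank-R reconstruction
    E = mask * (X - T)                        # residual on observed entries only
    # Gradients of 0.5 * ||mask * (X - T)||_F^2 w.r.t. each factor matrix.
    gA = np.einsum('ijk,jr,kr->ir', E, B, C)
    gB = np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E, A, B)
    return A - lr * gA, B - lr * gB, C - lr * gC
```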
arXiv Detail & Related papers (2021-01-26T22:11:06Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
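For orientation, the entropy-regularized relaxation behind such schemes is commonly written as (a standard formulation, not quoted from the paper):

$$\max_{\mu \in \mathcal{L}(G)}\ \langle \theta, \mu \rangle + \epsilon\, H(\mu),$$

where $\mathcal{L}(G)$ is the local marginal polytope of the graph, $\theta$ collects the potentials, and the entropy term $H$ smooths the LP objective, which is what makes accelerated-gradient machinery applicable to the resulting message-passing updates.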
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order search methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
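The subspace-splitting pattern described above can be sketched generically as follows; this is an illustration of the general idea, not SONIA's exact update, and `hess_in_subspace` is a hypothetical oracle returning the restricted Hessian Q^T H Q:

```python
# Generic split update: Newton-type step in a subspace, scaled steepest
# descent in its orthogonal complement (illustrative, not SONIA's update).
import numpy as np

def split_step(x, grad, hess_in_subspace, S, alpha=0.1):
    g = grad(x)
    Q, _ = np.linalg.qr(S)             # orthonormal basis for the subspace
    g_sub = Q.T @ g                    # gradient component inside the subspace
    H_sub = hess_in_subspace(x, Q)     # small k x k restricted Hessian (oracle)
    d_sub = Q @ np.linalg.solve(H_sub, g_sub)  # second-order step in subspace
    d_perp = alpha * (g - Q @ g_sub)   # scaled steepest descent in complement
    return x - (d_sub + d_perp)
```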
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
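As a point of reference, the simplest use of one of these embeddings is one-shot sketch-and-solve with a Gaussian projection; the paper studies iterative first-order methods built on such embeddings with sharper complexity, so this baseline is for context only:

```python
# One-shot sketch-and-solve least squares with a Gaussian embedding (baseline).
import numpy as np

def gaussian_sketch_solve(A, b, m):
    """Approximate argmin_x ||Ax - b||_2 via an m-row sketched problem."""
    S = np.random.randn(m, A.shape[0]) / np.sqrt(m)  # Gaussian projection, m << rows(A)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```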
arXiv Detail & Related papers (2020-02-21T17:45:32Z)