Bounding Real Tensor Optimizations via the Numerical Range
- URL: http://arxiv.org/abs/2212.12811v1
- Date: Sat, 24 Dec 2022 20:03:06 GMT
- Title: Bounding Real Tensor Optimizations via the Numerical Range
- Authors: Nathaniel Johnston, Logan Pipes
- Abstract summary: We show how the numerical range of a matrix can be used to bound the optimal value of certain optimization problems over real tensor product vectors. Our bound is stronger than the trivial bounds based on eigenvalues, and can be computed significantly faster than bounds provided by semidefinite programming relaxations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show how the numerical range of a matrix can be used to bound the optimal value of certain optimization problems over real tensor product vectors. Our bound is stronger than the trivial bounds based on eigenvalues, and can be computed significantly faster than bounds provided by semidefinite programming relaxations. We discuss numerous applications to other hard linear algebra problems, such as showing that a real subspace of matrices contains no rank-one matrix, and showing that a linear map acting on matrices is positive.
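The bound is built on the numerical range $F(M) = \{x^* M x : \|x\| = 1\}$. Below is a minimal Python sketch of the standard way to trace the boundary of $F(M)$ numerically, via top eigenvectors of rotated Hermitian parts. The specific matrix the paper feeds into the numerical range to bound a tensor optimization is not reproduced here, so treat this as the underlying machinery rather than the paper's bound itself.

```python
# Hedged sketch: trace the boundary of the numerical range
# F(M) = { x* M x : ||x|| = 1 } with the standard rotation trick.
# For each angle t, the top eigenvector of the Hermitian part of
# e^{-it} M yields a boundary point of F(M).
import numpy as np

def numerical_range_boundary(M, num_angles=360):
    """Boundary points of the numerical range of a square matrix M."""
    points = []
    for t in np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False):
        H = (np.exp(-1j * t) * M + np.exp(1j * t) * M.conj().T) / 2
        _, eigvecs = np.linalg.eigh(H)        # eigenvalues in ascending order
        x = eigvecs[:, -1]                    # top eigenvector of the rotated part
        points.append(x.conj() @ M @ x)       # a boundary point of F(M)
    return np.array(points)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))               # a generic real matrix
boundary = numerical_range_boundary(A)

# Trivial eigenvalue bound on max over unit v of v^T A v, for comparison:
trivial = np.linalg.eigvalsh((A + A.T) / 2).max()
print(trivial, boundary.real.max())           # agree: max Re F(A) = lambda_max((A+A^T)/2)
```

At angle $t = 0$ the supporting line reproduces the trivial bound $\lambda_{\max}((A + A^T)/2)$ mentioned in the abstract, which is why the two printed values agree.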
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets. (A toy version of the truncated approximation is sketched below.)
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
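A hedged sketch of the quantity the entry above studies: the entrywise error of a rank-r truncated eigendecomposition of a kernel matrix, compared with the spectral-norm error. The Gaussian kernel, the data, and the rank below are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: entrywise vs spectral error of a rank-r truncated
# eigendecomposition of a kernel matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))

# Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 2)

eigvals, eigvecs = np.linalg.eigh(K)          # eigenvalues in ascending order
r = 20
U = eigvecs[:, -r:]                           # top-r eigenvectors
K_r = U @ np.diag(eigvals[-r:]) @ U.T         # truncated eigendecomposition

entrywise_err = np.abs(K - K_r).max()         # the quantity the bounds control
spectral_err = eigvals[-r - 1]                # ||K - K_r||_2 = (r+1)-st largest eigenvalue
print(entrywise_err, spectral_err)
```

When the trailing eigenvectors are delocalised, the entrywise error is typically far smaller than the spectral error, which is the phenomenon the entry's bounds capture.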
- Multiresolution kernel matrix algebra [0.0]
We show that the compression of kernel matrices by means of samplets produces optimally sparse matrices in a certain S-format.
The inverse of a kernel matrix (if it exists) is compressible in the S-format as well.
The matrix algebra is justified mathematically by pseudodifferential calculus.
arXiv Detail & Related papers (2022-11-21T17:50:22Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
In several simulations, our method scales better than existing approaches in both computation time and memory. (The diagonal-estimation ingredient is sketched below.)
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
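A minimal sketch of the kind of ingredient the entry above couples together: a Hutchinson-style estimate of the diagonal of a matrix inverse using Rademacher probes and conjugate-gradient solves, so the inverse is never formed. This shows the generic matrix-free technique, not the paper's full Bayesian inference scheme; the matrix and probe count are illustrative.

```python
# Hedged sketch: estimate diag(A^{-1}) with Rademacher probes and
# conjugate-gradient solves, never forming the inverse explicitly.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
n = 300
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                   # symmetric positive definite

def estimate_inv_diagonal(A, num_probes=100):
    """Hutchinson-style estimator of diag(A^{-1}) via CG solves."""
    est = np.zeros(A.shape[0])
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=A.shape[0])  # Rademacher probe
        x, _ = cg(A, z)                               # x ~= A^{-1} z, matrix-free
        est += z * x                                  # E[z * (A^{-1} z)] = diag(A^{-1})
    return est / num_probes

approx = estimate_inv_diagonal(A)
exact = np.diag(np.linalg.inv(A))             # only for checking the sketch
print(np.abs(approx - exact).max())
```

In practice `A` would be supplied as a `LinearOperator` exposing only matrix-vector products, which is what makes the scheme scale.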
- Sparse Factorization of Large Square Matrices [10.94053598642913]
In this paper, we propose to approximate a large square matrix with a product of sparse full-rank matrices.
In the approximation, our method needs only $N(\log N)^2$ non-zero numbers for an $N \times N$ full matrix.
We show that our method gives a better approximation when the approximated matrix is sparse and high-rank.
arXiv Detail & Related papers (2021-09-16T18:42:21Z)
- Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm and perform numerical experiments on image datasets to corroborate our results. (The basic correlation step is sketched below.)
arXiv Detail & Related papers (2021-08-08T05:28:06Z)
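A minimal sketch of the correlation-based estimation behind the entry above, in the plainest setting: sign measurements under a dense Gaussian matrix, with the signal direction recovered from A^T y and a hard-thresholding step. The partial circulant measurements and generative prior of the paper are omitted; sizes and sparsity are illustrative.

```python
# Hedged sketch: correlation-based 1-bit compressive sensing recovery,
# Gaussian measurements and hard thresholding instead of the paper's
# partial circulant matrices and generative prior.
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 1000, 400, 10
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                        # 1-bit measurements lose the scale anyway

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                            # 1-bit (sign) measurements

x_hat = A.T @ y / m                           # correlation-based estimate
support = np.argsort(np.abs(x_hat))[-s:]      # keep the s largest entries
x_rec = np.zeros(n)
x_rec[support] = x_hat[support]
x_rec /= np.linalg.norm(x_rec)

print(abs(x_rec @ x))                         # cosine similarity with the true direction
```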
- Boolean Matrix Factorization with SAT and MaxSAT [6.85316573653194]
We show that our approaches allow a better factorization than existing approaches while keeping running times reasonable.
Our methods also allow the handling of incomplete matrices with missing entries.
arXiv Detail & Related papers (2021-06-18T12:57:46Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- A QUBO Algorithm to Compute Eigenvectors of Symmetric Matrices [0.0]
We describe an algorithm to compute the extremal eigenvalues and corresponding eigenvectors of a symmetric matrix by solving a sequence of Quadratic Binary Optimization problems.
This algorithm is robust across many different classes of symmetric matrices, can compute the eigenvector/eigenvalue pair to essentially arbitrary precision, and with minor modifications can also solve the generalized eigenvalue problem. (A single coarse QUBO step is sketched below.)
arXiv Detail & Related papers (2021-04-22T20:41:57Z)
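A hedged sketch of the coarsest step of such a QUBO approach: restrict the candidate eigenvector to sign vectors x in {-1,+1}^n (all of equal norm), so minimizing x^T A x becomes a quadratic binary problem, brute-forced here for a tiny matrix. The paper reaches arbitrary precision by refining this discretization over a sequence of QUBOs, which this sketch does not attempt.

```python
# Hedged sketch: one coarse QUBO for the minimal eigenvalue of a symmetric
# matrix, solved by brute force (only sensible for tiny n).
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                             # symmetric test matrix

best_val, best_x = np.inf, None
for bits in itertools.product([0, 1], repeat=n):
    x = 2 * np.array(bits) - 1                # map bits to a sign vector
    val = x @ A @ x                           # QUBO objective (||x|| is fixed)
    if val < best_val:
        best_val, best_x = val, x / np.sqrt(n)

rayleigh = best_x @ A @ best_x                # coarse upper bound on the minimal eigenvalue
print(rayleigh, np.linalg.eigvalsh(A)[0])     # compare with the true extremal eigenvalue
```

On quantum annealers or MaxSAT-style solvers the inner brute-force loop is replaced by the QUBO oracle; the discretization is then sharpened around the current best vector.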
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize the dictionary and coefficient matrices.
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results. (The core idea is sketched below.)
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
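A minimal sketch of the eigendecomposition-free idea in the entry above: drive a vector toward the smallest-eigenvalue eigenvector of a PSD matrix by minimizing ||Av||^2 on the unit sphere, with no eigendecomposition inside the optimization loop. In the paper the vector is produced by a deep network; here it is optimized directly, and the spectrum of A is constructed for illustration.

```python
# Hedged sketch: projected gradient descent on ||A v||^2 over the unit sphere
# converges to the smallest-eigenvalue eigenvector of a PSD matrix A,
# avoiding explicit differentiation through an eigendecomposition.
import numpy as np

rng = np.random.default_rng(5)
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
lam = np.linspace(0.5, 5.0, n)
lam[0] = 0.05                                     # well-separated smallest eigenvalue
A = Q @ np.diag(lam) @ Q.T                        # PSD matrix with known spectrum

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
lr = 0.4 / lam.max() ** 2                         # small enough to stay stable
for _ in range(2000):
    grad = 2 * A @ (A @ v)                        # gradient of ||A v||^2
    v -= lr * grad
    v /= np.linalg.norm(v)                        # project back onto the unit sphere

print(abs(v @ Q[:, 0]))                           # ~= 1: aligned with the smallest eigenvector
```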
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.