Can one hear a matrix? Recovering a real symmetric matrix from its
spectral data
- URL: http://arxiv.org/abs/2106.01819v3
- Date: Tue, 17 Aug 2021 12:24:22 GMT
- Title: Can one hear a matrix? Recovering a real symmetric matrix from its
spectral data
- Authors: Tomasz Maciążek, Uzy Smilansky
- Abstract summary: The spectrum of a real and symmetric $N\times N$ matrix determines the matrix up to unitary equivalence.
More spectral data is needed together with some sign indicators to remove the unitary ambiguities.
It is shown that one can optimize the ratio between redundancy and genericity by using the freedom of choice of the spectral information input.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spectrum of a real and symmetric $N\times N$ matrix determines the matrix
up to unitary equivalence. More spectral data is needed together with some sign
indicators to remove the unitary ambiguities. In the first part of this work we
specify the spectral and sign information required for a unique reconstruction
of general matrices. More specifically, the spectral information consists of
the spectra of the $N$ nested main minors of the original matrix of the sizes
$1,2,\dots,N$. However, due to the complicated nature of the required sign
data, improvements are needed in order to make the reconstruction procedure
feasible. With this in mind, the second part is restricted to banded matrices
where the amount of spectral data exceeds the number of the unknown matrix
entries. It is shown that one can take advantage of this redundancy to
guarantee unique reconstruction of {\it generic} matrices, in other words, this
subset of matrices is open, dense and of full measure in the set of real,
symmetric and banded matrices. It is shown that one can optimize the ratio
between redundancy and genericity by using the freedom of choice of the
spectral information input. We demonstrate our constructions in detail for
pentadiagonal matrices.
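The spectral data described above, the spectra of the $N$ nested main minors of sizes $1,2,\dots,N$, is straightforward to compute, and consecutive spectra satisfy the Cauchy interlacing inequalities. A minimal numerical sketch of this input data, assuming NumPy (the function names `spectral_data` and `interlaces` are illustrative, not from the paper):

```python
import numpy as np

def spectral_data(A):
    """Sorted eigenvalues of the nested main minors A[:k, :k], k = 1..N."""
    N = A.shape[0]
    return [np.sort(np.linalg.eigvalsh(A[:k, :k])) for k in range(1, N + 1)]

def interlaces(inner, outer):
    """Cauchy interlacing: eigenvalues of the size-k minor lie between
    consecutive eigenvalues of the size-(k+1) minor."""
    return all(outer[i] <= inner[i] <= outer[i + 1] for i in range(len(inner)))

# Random real symmetric 4x4 matrix as a test case.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

data = spectral_data(A)
assert all(interlaces(data[k], data[k + 1]) for k in range(len(data) - 1))
```

This only illustrates the forward direction (matrix to spectral data); the paper's contribution is the inverse problem, where the interlacing structure constrains, but does not by itself determine, the matrix entries — hence the additional sign indicators.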
Related papers
- Matrix Ordering through Spectral and Nilpotent Structures in Totally Ordered Complex Number Fields [2.2533084621250143]
We develop a total ordering relation for complex numbers, enabling comparisons of the spectral components of general matrices with complex eigenvalues.
We establish a theoretical framework for majorization ordering with complex-valued functions.
We characterize Jordan blocks of matrix functions using a generalized dominance order for nilpotent components.
arXiv Detail & Related papers (2025-01-17T23:34:17Z) - A note on MDS Property of Circulant Matrices [3.069335774032178]
In 2014, Gupta and Ray proved that circulant involutory matrices over the finite field $\mathbb{F}_{2^m}$ cannot be maximum distance separable (MDS).
This article delves into circulant matrices possessing these characteristics over the finite field $\mathbb{F}_{2^m}$.
arXiv Detail & Related papers (2024-06-22T16:00:00Z) - On MDS Property of g-Circulant Matrices [3.069335774032178]
We first discuss $g$-circulant matrices with involutory and MDS properties.
We then delve into $g$-circulant semi-involutory and semi-orthogonal matrices with entries from finite fields.
arXiv Detail & Related papers (2024-06-22T15:18:31Z) - A Characterization of Semi-Involutory MDS Matrices [3.069335774032178]
In symmetric cryptography, maximum distance separable (MDS) matrices with computationally simple inverses have wide applications.
Many block ciphers like AES, SQUARE, SHARK, and hash functions like PHOTON use an MDS matrix in the diffusion layer.
arXiv Detail & Related papers (2024-06-18T17:57:46Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Robust 1-bit Compressive Sensing with Partial Gaussian Circulant
Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Deep Two-way Matrix Reordering for Relational Data Analysis [41.60125423028092]
Matrix reordering is a task to permute rows and columns of a given observed matrix.
We propose a new matrix reordering method, Deep Two-way Matrix Reordering (DeepTMR), using a neural network model.
We demonstrate the effectiveness of proposed DeepTMR by applying it to both synthetic and practical data sets.
arXiv Detail & Related papers (2021-03-26T01:31:24Z) - Leveraged Matrix Completion with Noise [84.20092979053119]
We show that we can provably recover an unknown $n\times n$ matrix of rank $r$ from just about $\mathcal{O}(nr\log^2(n))$ entries.
Our proofs are supported by a novel approach that phrases sufficient optimality conditions based on the Golfing Scheme.
arXiv Detail & Related papers (2020-11-11T16:25:45Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon})$ samples, a number linear in the high dimension, and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.