Coefficients of almost-degenerate density matrix perturbation theory for
eigenvalue problems
- URL: http://arxiv.org/abs/2305.09026v2
- Date: Sun, 9 Jul 2023 13:39:00 GMT
- Authors: Charles Arnal, Louis Garrigue
- Abstract summary: We show that when several eigenvalues are close to each other, inverses of differences between eigenvalues appear as factors in the coefficients of the perturbative series.
We remove these artificial singularities from the expressions of the coefficients, allowing eigenvalue gaps to be arbitrarily small.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate almost-degenerate perturbation theory of eigenvalue problems,
using spectral projectors, also known as density matrices. When several
eigenvalues are close to each other, the coefficients of the perturbative
series become singular because inverses of differences between eigenvalues
appear as factors. We remove these artificial singularities from the
expressions of the coefficients of the series, allowing eigenvalue gaps to be
arbitrarily small, and even vanishing, in the resulting formulas.
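The singular factors described in the abstract can be seen in a small numerical experiment. Below is a minimal numpy sketch of the standard first-order Rayleigh-Schrodinger eigenvector correction (the textbook formula, illustrative only; it is not the paper's gap-free coefficients), showing how the 1/(E_i - E_j) factors blow up as the gap shrinks:

```python
import numpy as np

def first_order_correction(H0, V, i):
    """Textbook first-order correction to eigenvector i of H0 under a
    perturbation V.  Each term carries a factor 1/(E_i - E_j), which
    diverges when eigenvalue j is nearly degenerate with eigenvalue i."""
    E, U = np.linalg.eigh(H0)
    correction = np.zeros_like(U[:, i])
    for j in range(len(E)):
        if j != i:
            correction += (U[:, j] @ V @ U[:, i]) / (E[i] - E[j]) * U[:, j]
    return correction

def correction_norm(gap):
    """Two-level toy problem with a tunable eigenvalue gap."""
    H0 = np.diag([0.0, gap])
    V = np.array([[0.0, 0.1], [0.1, 0.0]])
    return np.linalg.norm(first_order_correction(H0, V, 0))

print(correction_norm(1.0))   # ~0.1: well-separated eigenvalues
print(correction_norm(1e-8))  # ~1e7: the artificial singularity at small gaps
```

The paper's contribution is precisely to rewrite such coefficients so that these inverse-gap factors cancel; the sketch only exhibits the singularity being removed.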
Related papers
- Graph-theoretical approach to the eigenvalue spectrum of perturbed higher-order exceptional points [0.0]
We advocate a graph-theoretical perspective that contributes to the understanding of perturbative effects on the eigenvalue spectrum of higher-order exceptional points.
We consider an illustrative example, a system of microrings coupled by a semi-infinite waveguide with an end mirror.
arXiv Detail & Related papers (2024-09-20T11:56:15Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
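For context, the underlying construction (a generic truncated-eigendecomposition low-rank approximation, not this paper's specific bounds) can be sketched in a few lines of numpy; the entrywise error it measures is the quantity the paper bounds:

```python
import numpy as np

def low_rank_kernel(K, r):
    """Rank-r approximation of a symmetric kernel matrix K obtained by
    keeping only the r largest-eigenvalue terms of its eigendecomposition."""
    E, U = np.linalg.eigh(K)            # eigenvalues in ascending order
    idx = np.argsort(E)[::-1][:r]       # indices of the r largest eigenvalues
    return (U[:, idx] * E[idx]) @ U[:, idx].T

# Gaussian (RBF) kernel matrix on random one-dimensional points.
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

K10 = low_rank_kernel(K, 10)
print(np.max(np.abs(K - K10)))  # entrywise error of the rank-10 approximation
```

The entrywise error is bounded above by the spectral-norm error, i.e. by the largest dropped eigenvalue; the paper's delocalisation result for small-eigenvalue eigenvectors is what allows sharper entrywise statements.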
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction [55.57072563835959]
Spectral graph neural networks are characterized by their filters.
We propose an eigenvalue correction strategy that frees filters from the constraints of repeated eigenvalue inputs.
arXiv Detail & Related papers (2024-01-28T08:12:00Z)
- The Inductive Bias of Flatness Regularization for Deep Matrix Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of the minimum trace of the Hessian solutions in deep linear networks.
We show that for all depths greater than one, under the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
arXiv Detail & Related papers (2023-06-22T23:14:57Z)
- Curvature-informed multi-task learning for graph networks [56.155331323304]
State-of-the-art graph neural networks attempt to predict multiple properties simultaneously.
We investigate a potential explanation: the curvature of each property's loss surface varies significantly across properties, leading to inefficient learning.
arXiv Detail & Related papers (2022-08-02T18:18:41Z)
- Exact analytical relation between the entropies and the dominant eigenvalue of random reduced density matrices [0.0]
In this paper, we show how the entropy (including the von Neumann entropy) obtained by tracing out subsystems of various sizes is related to the dominant eigenvalue of the corresponding reduced density matrices.
The connection between our results and the entanglement generated in quantum computing is illustrated with various examples.
arXiv Detail & Related papers (2021-10-25T20:00:31Z)
- An ubiquitous three-term recurrence relation [0.0]
We solve an eigenvalue equation that appears in several papers about a wide range of physical problems.
We compare the resulting eigenvalues with those provided by the truncation condition.
In this way we prove that those physical predictions are merely artifacts of the truncation condition.
arXiv Detail & Related papers (2021-04-07T17:55:10Z)
- Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2020-11-12T15:08:11Z)
- Gross misinterpretation of a conditionally solvable eigenvalue equation [0.0]
We solve an eigenvalue equation that appears in several papers about a wide range of physical problems.
We compare the resulting eigenvalues with those provided by the truncation condition.
In this way we prove that those physical predictions are merely artifacts of the truncation condition.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.