Matrix logistic map: fractal spectral distributions and transfer of
chaos
- URL: http://arxiv.org/abs/2303.06176v1
- Date: Fri, 10 Mar 2023 19:19:56 GMT
- Title: Matrix logistic map: fractal spectral distributions and transfer of
chaos
- Authors: Łukasz Pawela and Karol Życzkowski
- Abstract summary: We show that for an arbitrary initial ensemble of Hermitian random matrices with a continuous level density supported on the interval $[0,1]$, the asymptotic level density converges to the invariant measure of the logistic map.
This approach generalizes the known model of coupled logistic maps, and allows us to study the transition to chaos in complex networks and multidimensional systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard logistic map, $x'=ax(1-x)$, serves as a paradigmatic model to
demonstrate how apparently simple non-linear equations lead to complex and
chaotic dynamics. In this work we introduce and investigate its matrix analogue
defined for an arbitrary matrix $X$ of a given order $N$. We show that for an
arbitrary initial ensemble of Hermitian random matrices with a continuous level
density supported on the interval $[0,1]$, the asymptotic level density
converges to the invariant measure of the logistic map. Depending on the
parameter $a$ the constructed measure may be either singular, fractal or
described by a continuous density. In a broader class of maps, multiplication
by the scalar logistic parameter $a$ is replaced by a matrix transformation:
$aX(\mathbb{I}-X)$ becomes $BX(\mathbb{I}-X)B^{\dagger}$, where $A=BB^{\dagger}$
is a fixed positive matrix of order $N$. This approach generalizes the known
model of coupled logistic maps, and allows us to study the transition to chaos
in complex networks and multidimensional systems. In particular, associating
the matrix $B$ with a given graph we demonstrate the gradual transfer of chaos
between subsystems corresponding to vertices of a graph and coupled according
to its edges.
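
To make the construction concrete, here is a minimal numerical sketch of the map described above: it iterates $X \mapsto aX(\mathbb{I}-X)$ on a random Hermitian matrix whose spectrum starts with a continuous density on $[0,1]$, and also defines the generalized step $X \mapsto BX(\mathbb{I}-X)B^{\dagger}$. The choice of initial ensemble, the matrix size, the number of iterations, and the spectral clipping safeguard are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def random_hermitian_01(n, rng):
    """One possible initial ensemble: Hermitian matrix with eigenvalues drawn
    uniformly from [0, 1] and Haar-random eigenvectors."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # phase fix -> Haar unitary
    lam = rng.uniform(0.0, 1.0, size=n)                # continuous level density on [0, 1]
    return q @ np.diag(lam) @ q.conj().T

def logistic_step(X, a):
    """One step of the matrix logistic map  X -> a X (I - X),  0 <= a <= 4."""
    n = X.shape[0]
    Y = a * X @ (np.eye(n) - X)
    # Numerical safeguard: in exact arithmetic the spectrum stays in [0, 1],
    # but rounding errors can push eigenvalues slightly outside, where the
    # chaotic dynamics would amplify them.  Clip them back onto [0, 1].
    lam, U = np.linalg.eigh((Y + Y.conj().T) / 2)
    return U @ np.diag(np.clip(lam, 0.0, 1.0)) @ U.conj().T

def generalized_step(X, B):
    """Generalized step  X -> B X (I - X) B^dagger;  B = sqrt(a) * I recovers
    the scalar case, while a more general B couples the matrix entries."""
    n = X.shape[0]
    return B @ X @ (np.eye(n) - X) @ B.conj().T

rng = np.random.default_rng(0)
N, a, steps = 64, 4.0, 200
X = random_hermitian_01(N, rng)
for _ in range(steps):
    X = logistic_step(X, a)

# For a = 4 the level density should resemble the invariant density of the
# logistic map, rho(x) = 1 / (pi * sqrt(x * (1 - x))).
hist, _ = np.histogram(np.linalg.eigvalsh(X), bins=20, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```

Associating $B$ with the adjacency structure of a graph, so that $A=BB^{\dagger}$ couples the vertex subsystems, is the setting in which the transfer of chaos between subsystems is studied.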
Related papers
- Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that set it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, an exponential dependence on the number of heads $m$ is unavoidable.
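
For readers unfamiliar with the object being learned, here is a brief sketch of the forward pass of a multi-head attention layer with $m$ heads; the dimensions and weight shapes below are illustrative conventions, not the parametrization analyzed in the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo):
    """X: (seq_len, d_model); Wq/Wk/Wv: lists of m per-head projections of
    shape (d_model, d_head); Wo: (m * d_head, d_model) output projection."""
    heads = []
    for q_w, k_w, v_w in zip(Wq, Wk, Wv):
        Q, K, V = X @ q_w, X @ k_w, X @ v_w
        scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (seq, seq) attention weights
        heads.append(scores @ V)                          # (seq, d_head) per-head output
    return np.concatenate(heads, axis=-1) @ Wo            # (seq, d_model)

rng = np.random.default_rng(0)
m, d_model, d_head, seq = 4, 32, 8, 10
X = rng.standard_normal((seq, d_model))
Wq = [rng.standard_normal((d_model, d_head)) for _ in range(m)]
Wk = [rng.standard_normal((d_model, d_head)) for _ in range(m)]
Wv = [rng.standard_normal((d_model, d_head)) for _ in range(m)]
Wo = rng.standard_normal((m * d_head, d_model))
print(multi_head_attention(X, Wq, Wk, Wv, Wo).shape)  # (10, 32)
```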
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Convergence of a Normal Map-based Prox-SGD Method under the KL
Inequality [0.0]
We present a novel normal map-based proximal stochastic gradient algorithm ($\mathsf{norM}\text{-}\mathsf{SGD}$) and analyze its convergence under the Kurdyka-Łojasiewicz (KL) inequality.
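
As background on the algorithm family, here is a hedged sketch of a generic proximal stochastic gradient step for a composite objective $f(x) + \lambda\|x\|_1$; this illustrates the prox-SGD template that such methods refine, not the authors' exact normal map-based update or step-size rule.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_step(x, stochastic_grad, step, lam):
    """One generic prox-SGD step for  min_x f(x) + lam * ||x||_1."""
    return soft_threshold(x - step * stochastic_grad(x), step * lam)

# Toy composite problem: f(x) = 0.5 * ||Ax - b||^2 with minibatch gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = A @ (rng.standard_normal(50) * (rng.random(50) < 0.2)) + 0.01 * rng.standard_normal(200)

def minibatch_grad(x, batch=32):
    idx = rng.choice(A.shape[0], size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

x = np.zeros(50)
for k in range(500):
    x = prox_sgd_step(x, minibatch_grad, step=0.05 / np.sqrt(k + 1), lam=0.1)
print("nonzeros:", int(np.count_nonzero(np.abs(x) > 1e-8)))
```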
arXiv Detail & Related papers (2023-05-10T01:12:11Z) - Near-optimal fitting of ellipsoids to random points [68.12685213894112]
A basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
It is conjectured that a fitting ellipsoid exists with high probability whenever $n \lesssim d^2/4$; we resolve this conjecture up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(\,d^2/\mathrm{polylog}(d)\,)$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
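
To make the least-squares idea concrete, here is a small sketch in the spirit of that construction (not necessarily the exact estimator analyzed in the paper): center the search at $M_0=\mathbb{I}/d$ and take the minimum-norm symmetric correction satisfying the constraints $x_i^{\top}Mx_i=1$. The toy sizes below are far from the $n \sim d^2/\mathrm{polylog}(d)$ regime.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 40, 80                        # toy sizes; the interesting regime is n ~ d^2 / polylog(d)
X = rng.standard_normal((n, d))      # rows x_i are the random points

# Constraints x_i^T M x_i = 1, i.e. <x_i x_i^T, M> = 1, are linear in M.
# Center at M0 = I/d (which satisfies them in expectation) and add the
# minimum-Frobenius-norm correction solving the residual system.
A = np.stack([np.outer(x, x).ravel() for x in X])           # (n, d*d) constraint matrix
r = 1.0 - np.einsum('ij,ij->i', X, X) / d                   # residuals 1 - |x_i|^2 / d
delta = np.linalg.lstsq(A, r, rcond=None)[0].reshape(d, d)  # minimum-norm solution (n < d^2)
M = np.eye(d) / d + (delta + delta.T) / 2

fit = np.einsum('ij,jk,ik->i', X, M, X)
print("max constraint violation:", float(np.max(np.abs(fit - 1.0))))
# A nonnegative smallest eigenvalue certifies that {x : x^T M x = 1} is a
# genuine ellipsoid passing through every sample point.
print("min eigenvalue of M:", float(np.linalg.eigvalsh(M).min()))
```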
arXiv Detail & Related papers (2022-08-19T18:00:34Z) - Perturbation Analysis of Randomized SVD and its Applications to
High-dimensional Statistics [8.90202564665576]
We study the statistical properties of the randomized singular value decomposition (RSVD) under a general "signal-plus-noise" framework.
We derive nearly-optimal performance guarantees for RSVD when applied to three statistical inference problems.
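
For reference, a compact sketch of the basic randomized SVD primitive whose perturbation behavior is being analyzed, in the standard sketch-project-decompose form; the oversampling and power-iteration parameters are illustrative defaults.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2, rng=None):
    """Basic randomized SVD: sketch the range of A, project, then run a small exact SVD."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    k = min(rank + n_oversample, n)
    Omega = rng.standard_normal((n, k))          # random test matrix
    Y = A @ Omega                                # sample the range of A
    for _ in range(n_power_iter):                # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the sampled range
    B = Q.T @ A                                  # small (k x n) projected matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank]

# "Signal-plus-noise" toy example: a low-rank signal buried in Gaussian noise.
rng = np.random.default_rng(0)
m, n, r = 500, 300, 5
signal = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = signal + 0.1 * rng.standard_normal((m, n))
U, s, Vt = randomized_svd(A, rank=r, rng=rng)
print(np.round(s, 2))
```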
arXiv Detail & Related papers (2022-03-19T07:26:45Z) - Hybrid Model-based / Data-driven Graph Transform for Image Coding [54.31406300524195]
We present a hybrid model-based / data-driven approach to encode an intra-prediction residual block.
The first $K$ eigenvectors of a transform matrix are derived from a statistical model, e.g., the asymmetric discrete sine transform (ADST) for stability.
Using WebP as a baseline image codec, experimental results show that our hybrid graph transform achieved better energy compaction than the default discrete cosine transform (DCT) and better stability than the KLT.
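
A rough sketch of the hybrid recipe described above, under stated assumptions: the model-based part is stood in for by the low-frequency eigenvectors of a line-graph Laplacian (playing the role of an ADST-like basis), and the data-driven part is a KLT computed on the sample covariance restricted to the orthogonal complement. This illustrates the general construction, not the paper's exact transform or training procedure.

```python
import numpy as np

def hybrid_transform(samples, K):
    """Hybrid transform for length-N residual vectors: first K atoms from a
    fixed model-based basis, the remaining N-K atoms learned from the sample
    covariance restricted to the orthogonal complement."""
    N = samples.shape[1]
    # Model-based part: eigenvectors of a path-graph Laplacian (DCT/ADST-like).
    L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    L[0, 0] = L[-1, -1] = 1
    _, U_model = np.linalg.eigh(L)
    U1 = U_model[:, :K]                                  # first K low-frequency atoms
    # Data-driven part: KLT of the covariance projected onto span(U1)^perp.
    P = np.eye(N) - U1 @ U1.T
    C = np.cov(samples, rowvar=False)
    w, V = np.linalg.eigh(P @ C @ P)
    U2 = V[:, np.argsort(w)[::-1][: N - K]]              # top N-K atoms in the complement
    return np.concatenate([U1, U2], axis=1)              # orthonormal N x N transform

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))   # toy "residual blocks"
T = hybrid_transform(X, K=3)
print(np.allclose(T.T @ T, np.eye(8), atol=1e-8))                  # orthonormal basis
```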
arXiv Detail & Related papers (2022-03-02T15:36:44Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
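
A hedged sketch of the ternary-random-feature idea: projection weights quantized to $\{-1,0,+1\}$ and a sign-type nonlinearity, so that computing the features needs only additions and subtractions and each feature fits in very few bits. The sparsity level, activation, and normalization below are illustrative choices, not the exact TRF construction.

```python
import numpy as np

def ternary_random_features(X, n_features, sparsity=0.5, rng=None):
    """Random features with ternary weights W in {-1, 0, +1}: each projection
    is a signed sum of selected inputs, followed by a 1-bit sign activation."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    # P(0) = sparsity, P(+1) = P(-1) = (1 - sparsity) / 2
    W = rng.choice([-1, 0, 1], size=(d, n_features),
                   p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])
    # Conceptually this product is multiplication-free (only adds/subtracts).
    return np.sign(X @ W)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
Phi = ternary_random_features(X, n_features=2048, rng=rng)
K = Phi @ Phi.T / Phi.shape[1]        # empirical kernel (Gram) matrix of the features
print(K.shape, float(K[0, 0]))
```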
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^{T} - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ ...
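
For concreteness, a tiny numerical illustration of the objects in this summary: the resolvent $R(z)=(\frac{1}{n}XX^{T}-zI_p)^{-1}$ of a sample covariance matrix and the linear functional $\mathrm{tr}(AR(z))$, together with its empirical fluctuations over independent draws. The sizes, the choice of $z$, and the Gaussian entries are illustrative (the paper's setting allows non identically distributed columns).

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 100, 400
z = -1.0                                   # any z outside the (nonnegative) spectrum of XX^T / n

def resolvent(X, z):
    p = X.shape[0]
    return np.linalg.inv(X @ X.T / X.shape[1] - z * np.eye(p))

A = rng.standard_normal((p, p))            # deterministic test matrix
X = rng.standard_normal((p, n))            # here: i.i.d. columns, for simplicity
print("tr(A R(z)) =", float(np.trace(A @ resolvent(X, z))))

# The cited concentration result says the standard deviation of tr(A R(z)) is
# of order ||A||_* / sqrt(n); a crude empirical check over 50 independent draws:
vals = [np.trace(A @ resolvent(rng.standard_normal((p, n)), z)) for _ in range(50)]
print("empirical std of tr(A R(z)):", float(np.std(vals)))
```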
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Algebraic and geometric structures inside the Birkhoff polytope [0.0]
The Birkhoff polytope $\mathcal{B}_d$ consists of all bistochastic matrices of order $d$.
We prove that $\mathcal{L}_d$ and $\mathcal{F}_d$ are star-shaped with respect to the flat matrix.
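
As a small illustration of the terminology: a bistochastic matrix has nonnegative entries with all row and column sums equal to one, and the flat (van der Waerden) matrix has every entry equal to $1/d$. The sketch below generates a bistochastic matrix by Sinkhorn normalization and checks that the segment joining it to the flat matrix stays bistochastic; for the full polytope $\mathcal{B}_d$ this is immediate from convexity, whereas the paper establishes the analogous star-shapedness for the subsets $\mathcal{L}_d$ and $\mathcal{F}_d$.

```python
import numpy as np

def sinkhorn_bistochastic(M, iters=200):
    """Alternately normalize rows and columns of a positive matrix; the
    iteration converges to a bistochastic matrix (Sinkhorn's theorem)."""
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def is_bistochastic(M, tol=1e-8):
    return (M.min() >= -tol
            and np.allclose(M.sum(axis=0), 1.0, atol=tol)
            and np.allclose(M.sum(axis=1), 1.0, atol=tol))

rng = np.random.default_rng(0)
d = 6
B = sinkhorn_bistochastic(rng.random((d, d)))
flat = np.full((d, d), 1.0 / d)            # the flat (van der Waerden) matrix

# Star-shaped with respect to the flat matrix: every point of the set can be
# joined to the flat matrix by a segment lying entirely inside the set.
print(all(is_bistochastic((1 - t) * B + t * flat) for t in np.linspace(0, 1, 11)))
```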
arXiv Detail & Related papers (2021-01-27T09:51:24Z) - Cospectrality preserving graph modifications and eigenvector properties
via walk equivalence of vertices [0.0]
Cospectrality is a powerful generalization of exchange symmetry and can be applied to all real-valued symmetric matrices.
We show that the powers of a matrix with cospectral vertices induce further local relations on its eigenvectors.
Our work paves the way for flexibly exploiting hidden structural symmetries in the design of generic complex network-like systems.
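
To illustrate the notions involved, here is a small sketch: two vertices $u,v$ are cospectral with respect to a real symmetric matrix $M$ exactly when $(M^k)_{uu}=(M^k)_{vv}$ for every power $k$, and on a cospectral pair every eigenvector belonging to a nondegenerate eigenvalue has components of equal magnitude. The example graph (a path, where the pair arises from an exchange symmetry) is only the simplest case; cospectrality is strictly more general.

```python
import numpy as np

def are_cospectral(M, u, v, kmax=None):
    """Check whether vertices u and v are cospectral for the real symmetric
    matrix M, i.e. (M^k)_{uu} == (M^k)_{vv} for all powers k (k up to the
    matrix dimension suffices)."""
    n = M.shape[0]
    kmax = n if kmax is None else kmax
    P = np.eye(n)
    for _ in range(kmax + 1):
        if not np.isclose(P[u, u], P[v, v], atol=1e-9):
            return False
        P = P @ M
    return True

# Adjacency matrix of the path graph 0-1-2-3: vertices 0 and 3 are related by
# an exchange symmetry, the simplest way to obtain a cospectral pair.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(are_cospectral(A, 0, 3))   # True
print(are_cospectral(A, 0, 1))   # False

# Consequence for eigenvectors: for nondegenerate eigenvalues the eigenvector
# components on a cospectral pair have equal magnitude.
w, V = np.linalg.eigh(A)
print(np.allclose(np.abs(V[0, :]), np.abs(V[3, :])))   # True
```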
arXiv Detail & Related papers (2020-07-15T10:54:31Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
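
A minimal sketch of the resulting computational gain: with such a cost, the Gibbs kernel factorizes as $K=\Phi_x\Phi_y^{\top}$ for feature matrices $\Phi_x,\Phi_y \in \mathbb{R}^{n\times r}_+$ (taking the entropic regularization $\varepsilon=1$ for simplicity), so each Sinkhorn update costs $O(nr)$ instead of $O(n^2)$. The feature map $\varphi$ below is a placeholder positive map, not one of the constructions proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 2000, 5, 64
x = rng.standard_normal((n, d))              # source points
y = rng.standard_normal((n, d)) + 0.5        # target points
a = np.full(n, 1.0 / n)                      # source marginal
b = np.full(n, 1.0 / n)                      # target marginal

# Placeholder positive feature map phi : R^d -> R^r_+ (illustrative only).
W = rng.standard_normal((d, r))
def phi(z):
    return np.exp(z @ W - 0.5 * np.sum(z ** 2, axis=1, keepdims=True)) / np.sqrt(r)

Phi_x, Phi_y = phi(x), phi(y)                # (n, r) feature matrices

# Sinkhorn iterations with the factorized Gibbs kernel K = Phi_x @ Phi_y.T
# (entropic regularization eps = 1): each update is O(n r), never O(n^2).
u, v = np.ones(n), np.ones(n)
for _ in range(200):
    u = a / (Phi_x @ (Phi_y.T @ v))
    v = b / (Phi_y @ (Phi_x.T @ u))

# The transport plan is P = diag(u) K diag(v); check its row marginal.
row_marginal = u * (Phi_x @ (Phi_y.T @ v))
print("max row-marginal error:", float(np.max(np.abs(row_marginal - a))))
```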
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Phase retrieval in high dimensions: Statistical and computational phase
transitions [27.437775143419987]
We consider the problem of reconstructing a signal $\mathbf{X}^\star$ from $m$ (possibly noisy) observations.
In particular, the information-theoretic transition to perfect recovery for full-rank matrices appears at $\alpha=1$ and $\alpha=2$, where $\alpha=m/n$ is the sampling ratio.
Our work provides an extensive classification of the statistical and algorithmic thresholds in high-dimensional phase retrieval.
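
A brief sketch of the measurement model behind these thresholds, assuming the standard setting of an $n$-dimensional real signal, an i.i.d. Gaussian sensing matrix $\Phi$, and noiseless phaseless observations; the exact scaling conventions and priors used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
alpha = 1.5                                   # sampling ratio alpha = m / n
m = int(alpha * n)

x_star = rng.standard_normal(n)               # unknown real signal
Phi = rng.standard_normal((m, n))             # i.i.d. Gaussian sensing matrix
Y = np.abs(Phi @ x_star / np.sqrt(n)) ** 2    # phaseless, noiseless observations

# The observations are invariant under a global sign flip of the signal, so
# "perfect recovery" can only ever mean recovery up to that sign.
print(np.allclose(Y, np.abs(Phi @ (-x_star) / np.sqrt(n)) ** 2))   # True

# The ratio alpha = m / n is the quantity compared against the transition
# values alpha = 1 and alpha = 2 quoted above.
print("alpha =", m / n)
```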
arXiv Detail & Related papers (2020-06-09T13:03:29Z)