Graph-Theoretic Analysis of $n$-Replica Time Evolution in the Brownian Gaussian Unitary Ensemble
- URL: http://arxiv.org/abs/2502.09681v1
- Date: Thu, 13 Feb 2025 12:24:50 GMT
- Title: Graph-Theoretic Analysis of $n$-Replica Time Evolution in the Brownian Gaussian Unitary Ensemble
- Authors: Tingfei Li, Cheng Peng, Jianghui Yu
- Abstract summary: We investigate the $n$-replica time evolution operator $\mathcal{U}_n(t)\equiv e^{\mathcal{L}_n t}$ for the Brownian Gaussian Unitary Ensemble (BGUE) using a graph-theoretic approach.
Explicit representations for the cases of $n = 2$ and $n = 3$ are derived, emphasizing the role of graph categorization in simplifying calculations.
- Abstract: In this paper, we investigate the $n$-replica time evolution operator $\mathcal{U}_n(t)\equiv e^{\mathcal{L}_nt} $ for the Brownian Gaussian Unitary Ensemble (BGUE) using a graph-theoretic approach. We examine the moments of the generating operator $\mathcal{L}_n$, which governs the Euclidean time evolution within an auxiliary $D^{2n}$-dimensional Hilbert space, where $D$ represents the dimension of the Hilbert space for the original system. Explicit representations for the cases of $n = 2$ and $n = 3$ are derived, emphasizing the role of graph categorization in simplifying calculations. Furthermore, we present a general approach to streamline the calculation of time evolution for arbitrary $n$, supported by a detailed example of $n = 4$. Our results demonstrate that the $n$-replica framework not only facilitates the evaluation of various observables but also provides valuable insights into the relationship between Brownian disordered systems and quantum information theory.
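The disorder-averaged replicated propagator described in the abstract can be probed numerically. Below is a minimal Monte Carlo sketch (not from the paper): it discretizes the Brownian evolution with GUE increments, replicates each step as $U \otimes U^* \otimes \cdots$ on the auxiliary $D^{2n}$-dimensional space, and verifies the exact fixed point $\mathrm{vec}(I)^{\otimes n}$. The variance convention for the GUE draws and the step parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 2, 2            # original Hilbert-space dimension and replica number (small, for illustration)
dt, steps, trials = 1e-2, 50, 100

def gue_sample(D, rng):
    # A GUE draw; the overall variance normalization is a placeholder convention,
    # not necessarily the one used in the paper.
    A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    return (A + A.conj().T) / 2

def replicated_step(D, n, dt, rng):
    # One Brownian step: U = exp(-i H sqrt(dt)), replicated as U (x) U* (x) ...,
    # acting on the auxiliary D^{2n}-dimensional space.
    H = gue_sample(D, rng)
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * np.sqrt(dt))) @ V.conj().T
    M = np.array([[1.0 + 0j]])
    for _ in range(n):
        M = np.kron(np.kron(M, U), U.conj())
    return M

# Monte Carlo estimate of the disorder-averaged n-replica propagator at t = steps * dt
dim = D ** (2 * n)
avg = np.zeros((dim, dim), dtype=complex)
for _ in range(trials):
    M = np.eye(dim, dtype=complex)
    for _ in range(steps):
        M = replicated_step(D, n, dt, rng) @ M
    avg += M
avg /= trials

# Sanity check: each U (x) U* fixes vec(I), so the replicated identity vector
# vec(I)^{(x) n} is an exact fixed point of the averaged propagator.
v = np.array([1.0 + 0j])
for _ in range(n):
    v = np.kron(v, np.eye(D).flatten().astype(complex))
print(np.allclose(avg @ v, v))
```

Because every realization is unitary, the averaged propagator is a contraction, and the replicated identity vector is preserved exactly rather than only on average, which makes it a cheap consistency test for any such simulation.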
Related papers
- Second quantization for classical nonlinear dynamics [0.0]
We propose a framework for representing the evolution of observables of measure-preserving ergodic flows through infinite-dimensional rotation systems on tori.
We show that their Banach algebra spectra, $\sigma(F_w(\mathcal{H}_\tau))$, decompose into a family of tori of potentially infinite dimension.
Our scheme also employs a procedure for representing observables of the original system by reproducing functions on finite-dimensional tori in $\sigma(F_w(\mathcal{H}_\tau))$ of arbitrarily large degree.
arXiv Detail & Related papers (2025-01-13T15:36:53Z) - Tensor network approximation of Koopman operators [0.0]
We propose a framework for approximating the evolution of observables of measure-preserving ergodic systems.
Our approach is based on a spectrally-convergent approximation of the skew-adjoint Koopman generator.
A key feature of this quantum-inspired approximation is that it captures information from a tensor product space of dimension $(2d+1)^n$.
arXiv Detail & Related papers (2024-07-09T21:40:14Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Ground State Preparation via Qubitization [0.0]
We describe a protocol for preparing the ground state of a Hamiltonian $H$ on a quantum computer.
The method relies on the so-called "qubitization" procedure of Low and Chuang.
We illustrate our method on two models: the transverse field Ising model and a single qubit toy model.
arXiv Detail & Related papers (2023-06-26T18:20:48Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion: the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is assumed additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - $TimeEvolver$: A Program for Time Evolution With Improved Error Bound [0.0]
We present $TimeEvolver$, a program for computing time evolution in a generic quantum system.
It relies on Krylov subspace techniques to tackle the problem of applying the exponential of a large sparse matrix $-iHt$ to a vector.
The fact that $H$ is Hermitian makes it possible to provide an easily computable bound on the accuracy of the Krylov approximation.
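The Krylov idea behind such time evolvers can be sketched in a few lines. The following is a generic Lanczos approximation of $e^{-iHt}v$ for Hermitian $H$, not TimeEvolver's actual implementation or its error bound; Hermiticity is what makes the projected matrix real tridiagonal and cheap to exponentiate.

```python
import numpy as np

def krylov_expm(H, v, t, m=30):
    # Lanczos (Krylov-subspace) approximation of exp(-i t H) v for Hermitian H.
    # A generic sketch of the technique, not TimeEvolver's refined algorithm.
    nrm = np.linalg.norm(v)
    m = min(m, len(v))
    V = np.zeros((len(v), m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 0))
    V[:, 0] = v / nrm
    k = m
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real      # Hermiticity => real diagonal
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:                  # invariant subspace found: stop early
                k = j + 1
                break
            V[:, j + 1] = w / beta[j]
    # Exponentiate the small real tridiagonal projection T = V^H H V
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    wT, S = np.linalg.eigh(T)
    y = S @ (np.exp(-1j * t * wT) * S.conj().T[:, 0])
    return nrm * (V[:, :k] @ y)

# Usage: compare against exact diagonalization on a random Hermitian matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200))
H = (A + A.conj().T) / 2
H = H / np.linalg.norm(H, 2)                     # normalize so ||H t|| is O(1)
v = rng.normal(size=200) + 0j
w, U = np.linalg.eigh(H)
exact = U @ (np.exp(-1j * w) * (U.conj().T @ v))
approx = krylov_expm(H, v, 1.0, m=30)
print(np.linalg.norm(exact - approx))
```

For $\lVert Ht \rVert$ of order one, the Krylov error decays superexponentially in the subspace dimension $m$, which is why a modest $m$ suffices here.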
arXiv Detail & Related papers (2022-05-30T18:00:16Z) - Quantum double aspects of surface code models [77.34726150561087]
We revisit the Kitaev model for fault tolerant quantum computing on a square lattice with underlying quantum double $D(G)$ symmetry.
We show how our constructions generalise to $D(H)$ models based on a finite-dimensional Hopf algebra $H$.
arXiv Detail & Related papers (2021-06-25T17:03:38Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance; and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log \langle \varphi(x), \varphi(y) \rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$.
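The point of such a cost is that the Gibbs kernel factorizes, $e^{-c(x,y)} = \langle \varphi(x), \varphi(y) \rangle$, so Sinkhorn matvecs cost $O(nr)$ instead of $O(n^2)$. A minimal sketch with random softmax features as a stand-in (the paper's specific feature construction differs):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 500, 16     # n points, r-dimensional positive features with r << n

# Illustrative positive feature maps phi: ground space -> R^r_+ (random softmax
# features, purely as a stand-in for the paper's construction).
Phi_x = np.exp(rng.normal(size=(n, r))); Phi_x /= Phi_x.sum(axis=1, keepdims=True)
Phi_y = np.exp(rng.normal(size=(n, r))); Phi_y /= Phi_y.sum(axis=1, keepdims=True)

# With c(x, y) = -log <phi(x), phi(y)> and unit regularization, the Gibbs kernel
# is exp(-c) = Phi_x Phi_y^T, so each Sinkhorn matvec costs O(n r).
a = np.full(n, 1.0 / n)    # source marginal
b = np.full(n, 1.0 / n)    # target marginal
u = np.ones(n)
v = np.ones(n)
for _ in range(500):
    u = a / (Phi_x @ (Phi_y.T @ v))   # row scaling, kernel never materialized
    v = b / (Phi_y @ (Phi_x.T @ u))   # column scaling

# Marginals of the implicit transport plan diag(u) K diag(v)
row = u * (Phi_x @ (Phi_y.T @ v))
col = v * (Phi_y @ (Phi_x.T @ u))
print(np.allclose(col, b), np.abs(row - a).max())
```

After a column update the column marginal is satisfied exactly; the row marginal converges linearly, so the printed residual shrinks with the iteration count.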
arXiv Detail & Related papers (2020-06-12T10:21:40Z)