Note on entropy dynamics in the Brownian SYK model
- URL: http://arxiv.org/abs/2011.08158v2
- Date: Tue, 9 Mar 2021 16:00:20 GMT
- Title: Note on entropy dynamics in the Brownian SYK model
- Authors: Shao-Kai Jian, Brian Swingle
- Abstract summary: We study the time evolution of Rényi entropy in a system of two coupled Brownian SYK clusters.
The Rényi entropy of one cluster grows linearly and then saturates to the coarse-grained entropy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the time evolution of Rényi entropy in a system of two coupled Brownian SYK clusters evolving from an initial product state. The Rényi entropy of one cluster grows linearly and then saturates to the coarse-grained entropy. This Page curve is obtained by two different methods, a path integral saddle point analysis and an operator dynamics analysis. Using the Brownian character of the dynamics, we derive a master equation which controls the operator dynamics and gives the Page curve for purity. Insight into the physics of this complicated master equation is provided by a complementary path integral method: replica-diagonal and non-diagonal saddles are responsible for the linear growth and saturation of Rényi entropy, respectively.
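In schematic form (our summary notation, not the paper's exact expressions), the second Rényi entropy S^{(2)}(t) = -\log \mathrm{Tr}\,\rho^2(t) of one cluster traces a Page curve

  S^{(2)}(t) \approx \min\big(\lambda\, t,\; S_{\rm coarse}\big),

with the replica-diagonal saddle producing the linear ramp \lambda t and the replica-non-diagonal saddle producing the plateau at the coarse-grained entropy S_{\rm coarse}; equivalently, the purity e^{-S^{(2)}(t)} decays exponentially before leveling off.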
Related papers
- Tensor product random matrix theory [39.58317527488534]
We introduce a real-time field theory approach to the evolution of correlated quantum systems.
We describe the full range of such crossover dynamics, from initial product states to a maximum entropy ergodic state.
arXiv Detail & Related papers (2024-04-16T21:40:57Z)
- Novel approach of exploring ASEP-like models through the Yang Baxter Equation [49.1574468325115]
An ansatz for the Yang-Baxter equation is proposed, inspired by the Bethe ansatz treatment of the ASEP spin model.
Various classes of Hamiltonian densities arising from two types of R-matrices are found, which also appear as solutions of the constant YBE.
A summary of finalised results reveals general non-Hermitian spin-1/2 chain models.
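For reference, the constant Yang-Baxter equation mentioned above is the standard matrix identity

  R_{12} R_{13} R_{23} = R_{23} R_{13} R_{12},

where R_{ij} denotes R acting on the i-th and j-th factors of V \otimes V \otimes V.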
arXiv Detail & Related papers (2024-03-05T17:52:20Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
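As background, a minimal full-batch sketch of the dual-descent idea (our illustrative code, not the paper's algorithm; rbf_kernel and all parameter choices are assumptions):

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0):
        # Squared-exponential (RBF) kernel matrix; X1, X2 have shape (n, d).
        sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq_dists / lengthscale ** 2)

    def gp_dual_descent(X, y, noise=0.1, steps=1000):
        # GP regression weights solve (K + noise^2 I) alpha = y.  Instead of
        # a direct solve, run gradient descent on the convex dual objective
        #     L(alpha) = 0.5 alpha^T (K + noise^2 I) alpha - alpha^T y,
        # whose gradient is (K + noise^2 I) alpha - y.
        A = rbf_kernel(X, X) + noise ** 2 * np.eye(len(y))
        lr = 1.0 / np.linalg.eigvalsh(A)[-1]  # safe step: 1 / largest eigenvalue
        alpha = np.zeros_like(y, dtype=float)
        for _ in range(steps):
            alpha -= lr * (A @ alpha - y)
        return alpha  # predictive mean at X_new: rbf_kernel(X_new, X) @ alpha

A stochastic variant would replace the full gradient with minibatch estimates at each step, which is where the optimisation insights the abstract alludes to would enter.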
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
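For context, the generic SGLD update with without-replacement minibatching looks like the following sketch (standard textbook SGLD with our illustrative names; the paper's proposed variation may differ in detail):

    import numpy as np

    def sgld_epoch(theta, grad_fn, data, lr=1e-3, temperature=1.0,
                   batch_size=32, rng=None):
        # One epoch of stochastic gradient Langevin dynamics.  Without-
        # replacement minibatching: shuffle once, then sweep disjoint batches.
        # grad_fn(theta, batch) returns an array with theta's shape.
        rng = rng or np.random.default_rng()
        order = rng.permutation(len(data))
        for start in range(0, len(data), batch_size):
            batch = data[order[start:start + batch_size]]
            # Gradient step plus Gaussian noise; the scale sqrt(2 * lr * T)
            # targets a stationary density ~ exp(-loss / T) for small lr.
            theta = (theta - lr * grad_fn(theta, batch)
                     + np.sqrt(2.0 * lr * temperature) * rng.normal(size=theta.shape))
        return theta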
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Chaos and localized phases in a two-body linear kicked rotor system [0.0]
We show that chaos can be induced in the integrable linear kicked rotor through interactions between the momenta of rotors.
The quantum dynamics of this chaotic model, upon variation of kicking and interaction strengths, is shown to exhibit a variety of phases.
We point out the signatures of these phases from the perspective of entanglement production in this system.
arXiv Detail & Related papers (2023-04-18T11:00:06Z)
- Page curves and typical entanglement in linear optics [0.0]
We study entanglement within a set of squeezed modes that have been evolved by a random linear optical unitary.
We prove various results on the typicality of entanglement as measured by the Rényi-2 entropy.
Our main results make use of a symmetry property obeyed by the average and the variance of the entropy that dramatically simplifies the averaging over unitaries.
arXiv Detail & Related papers (2022-09-14T18:00:03Z)
- Thermodynamics-informed graph neural networks [0.09332987715848712]
We propose using both geometric and thermodynamic inductive biases to improve accuracy and generalization of the resulting integration scheme.
The first is achieved with Graph Neural Networks, which induce a non-Euclidean geometric prior and permutation-invariant node and edge update functions.
The second bias is enforced by learning the GENERIC structure of the problem, an extension of the Hamiltonian formalism, to model more general non-conservative dynamics.
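For reference, the GENERIC formalism mentioned above evolves a state z through an energy E and an entropy S (standard form, stated here as background):

  \frac{dz}{dt} = L(z)\,\frac{\partial E}{\partial z} + M(z)\,\frac{\partial S}{\partial z},
  \qquad L\,\frac{\partial S}{\partial z} = 0, \qquad M\,\frac{\partial E}{\partial z} = 0,

where L is antisymmetric (the reversible, Hamiltonian part) and M is symmetric positive semi-definite (the irreversible, dissipative part); the degeneracy conditions enforce energy conservation and non-negative entropy production, which is the structure such networks learn.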
arXiv Detail & Related papers (2022-03-03T17:30:44Z)
- Geometric phase in a dissipative Jaynes-Cummings model: theoretical explanation for resonance robustness [68.8204255655161]
We compute the geometric phases acquired in both unitary and dissipative Jaynes-Cummings models.
In the dissipative model, the non-unitary effects arise from the outflow of photons through the cavity walls.
We show the geometric phase is robust, exhibiting a vanishing correction under a non-unitary evolution.
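As background (a standard kinematic definition, not notation taken from the paper), the geometric phase of a normalized state history |\psi(t)\rangle, t \in [0,\tau], which remains well defined for non-unitary evolutions, is

  \phi_g = \arg\langle\psi(0)|\psi(\tau)\rangle - \mathrm{Im}\int_0^\tau \langle\psi(t)|\dot\psi(t)\rangle\,dt,

i.e. the total phase minus the accumulated dynamical phase.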
arXiv Detail & Related papers (2021-10-27T15:27:54Z)
- Bernstein-Greene-Kruskal approach for the quantum Vlasov equation [91.3755431537592]
The one-dimensional stationary quantum Vlasov equation is analyzed using the energy as one of the dynamical variables.
In the semiclassical case where quantum tunneling effects are small, an infinite series solution is developed.
arXiv Detail & Related papers (2021-02-18T20:55:04Z)
- Classical Models of Entanglement in Monitored Random Circuits [0.0]
We show that the evolution of entanglement entropy in quantum circuits composed of Haar-random gates and projective measurements can be captured by classical models.
We also establish a Markov model for the evolution of the zeroth Rényi entropy and demonstrate that, in one dimension and in the limit of large local dimension, it coincides with the corresponding second-Rényi-entropy model.
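For reference, the zeroth Rényi (Hartley) entropy depends only on the support of the reduced state,

  S_0(\rho_A) = \log \mathrm{rank}(\rho_A),

a feature that makes a purely classical Markov description of its evolution possible.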
arXiv Detail & Related papers (2020-04-14T18:00:14Z)
- On the Convex Behavior of Deep Neural Networks in Relation to the Layers' Width [99.24399270311069]
We observe that for wider networks, minimizing the loss with gradient descent maneuvers through surfaces of positive curvature at the start and end of training, and close-to-zero curvature in between.
In other words, it seems that during crucial parts of the training process, the Hessian in wide networks is dominated by the component G.
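A plausible reading (our assumption, stated as background): G denotes the Gauss-Newton term in the standard Hessian decomposition for a loss \ell(f_\theta(x)),

  H = \underbrace{J^\top (\nabla_f^2 \ell)\, J}_{G} + \sum_i \frac{\partial \ell}{\partial f_i}\,\nabla_\theta^2 f_i, \qquad J = \frac{\partial f}{\partial \theta},

where G is positive semi-definite whenever \ell is convex in the network output, consistent with the near-convex training behavior described above.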
arXiv Detail & Related papers (2020-01-14T16:30:01Z)