A Lanczos approach to the Adiabatic Gauge Potential
- URL: http://arxiv.org/abs/2302.07228v1
- Date: Tue, 14 Feb 2023 18:18:21 GMT
- Title: A Lanczos approach to the Adiabatic Gauge Potential
- Authors: Budhaditya Bhattacharjee
- Abstract summary: The Adiabatic Gauge Potential (AGP) measures the rate at which the eigensystem of a Hamiltonian changes under adiabatic deformations.
We employ a version of this approach by using the Lanczos algorithm to evaluate the AGP operator in terms of Krylov vectors and the AGP norm in terms of the Lanczos coefficients.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Adiabatic Gauge Potential (AGP) measures the rate at which the
eigensystem of a Hamiltonian changes under adiabatic deformations. There are many
ways of constructing the AGP operator and evaluating the AGP norm. Recently, it
was proposed that a Gram-Schmidt-type algorithm can be used to explicitly
evaluate the expression of the AGP. We employ a version of this approach by
using the Lanczos algorithm to evaluate the AGP operator in terms of Krylov
vectors and the AGP norm in terms of the Lanczos coefficients. The algorithm is
used to explicitly construct the AGP operator for some simple systems. We
derive a relation between the AGP norm and the autocorrelation function of the
deformation operator. We present a modification of the variational approach to
derive the regulated AGP norm with the least number of computational steps.
Using this, we approximate the AGP to varying degrees of success. Finally, we
compare and contrast the quantum chaos probing capacities of the AGP and
K-complexity, in view of the Operator Growth Hypothesis.
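The Krylov-space construction described in the abstract can be illustrated with a generic Lanczos iteration in operator space. The sketch below is only a minimal illustration, not the paper's actual construction: it assumes a toy random Hermitian Hamiltonian `H` and a hypothetical deformation operator `O`, uses the Hilbert-Schmidt inner product Tr(A†B), and returns the Lanczos coefficients b_n together with the orthonormal Krylov vectors generated by repeated application of the commutator [H, ·].

```python
import numpy as np

def liouvillian(H, O):
    # action of the Liouvillian superoperator: [H, O]
    return H @ O - O @ H

def lanczos_coefficients(H, O0, nmax=8):
    """Lanczos iteration in operator space with the Hilbert-Schmidt
    inner product <A, B> = Tr(A^dag B), with full re-orthogonalization
    for numerical stability."""
    def ip(A, B):
        return np.trace(A.conj().T @ B)

    O0 = O0 / np.sqrt(ip(O0, O0).real)   # normalize the seed operator
    basis = [O0]                          # orthonormal Krylov vectors
    bs = []                               # Lanczos coefficients b_n
    A = liouvillian(H, O0)
    for _ in range(1, nmax):
        for Q in basis:                   # project out earlier vectors
            A = A - ip(Q, A) * Q
        b = np.sqrt(ip(A, A).real)
        if b < 1e-12:                     # Krylov space exhausted
            break
        bs.append(b)
        On = A / b
        basis.append(On)
        A = liouvillian(H, On)
    return bs, basis

# toy Hermitian H and deformation operator O (both hypothetical)
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2
P = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
O = (P + P.conj().T) / 2
bs, krylov = lanczos_coefficients(H, O, nmax=8)
```

In the paper's approach the AGP operator is expanded in such a Krylov basis and its norm is expressed through the coefficients b_n; the sketch stops at producing the coefficients and basis themselves.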
Related papers
- A numerical approach for calculating exact non-adiabatic terms in
quantum dynamics [0.0]
We present a novel approach to computing the Adiabatic Gauge Potential (AGP), which gives information on the non-adiabatic terms that arise from time dependence in the Hamiltonian.
We use this approach to study the AGP obtained for the transverse field Ising model on a variety of graphs, showing how the different underlying graph structures can give rise to very different scaling for the number of terms required in the AGP.
arXiv Detail & Related papers (2024-01-19T19:00:25Z)
- A Kronecker product accelerated efficient sparse Gaussian Process (E-SGP) for flow emulation [2.563626165548781]
This paper introduces an efficient sparse Gaussian process (E-SGP) for the surrogate modelling of fluid mechanics.
It is a further development of the approximated sparse GP algorithm, combining the concepts of the efficient GP (E-GP) and the variational energy free sparse Gaussian process (VEF-SGP).
arXiv Detail & Related papers (2023-12-13T11:29:40Z)
- Deep Transformed Gaussian Processes [0.0]
Transformed Gaussian Processes (TGPs) are processes specified by transforming samples from the joint distribution from a prior process (typically a GP) using an invertible transformation.
We propose a generalization of TGPs named Deep Transformed Gaussian Processes (DTGPs), which follows the trend of concatenating layers of processes.
Experiments conducted evaluate the proposed DTGPs in multiple regression datasets, achieving good scalability and performance.
arXiv Detail & Related papers (2023-10-27T16:09:39Z)
- Interactive Segmentation as Gaussian Process Classification [58.44673380545409]
Click-based interactive segmentation (IS) aims to extract the target objects under user interaction.
Most of the current deep learning (DL)-based methods mainly follow the general pipelines of semantic segmentation.
We propose to formulate the IS task as a Gaussian process (GP)-based pixel-wise binary classification model on each image.
arXiv Detail & Related papers (2023-02-28T14:01:01Z)
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI to adapt rules from batched GP-Opt.
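The batching idea in this entry, committing to one acquisition maximizer for several evaluations before refitting the GP, can be sketched with a toy UCB loop. Everything below is hypothetical (the RBF kernel, the 1-D objective, the noise level, and the UCB constant); it only illustrates the repeated-evaluation scheme, not the paper's actual modified GP-UCB or GP-EI rules.

```python
import numpy as np

def rbf(X, Y, ls=0.4):
    # squared-exponential kernel between two sets of input points
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-2):
    # standard GP regression posterior mean and variance at test points
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v ** 2).sum(0)   # rbf(x, x) = 1 on the diagonal
    return mu, np.maximum(var, 0.0)

def objective(x):
    # hypothetical noisy black-box function, not from any of the papers
    return -np.sin(3.0 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 2.0, 200)[:, None]
X = np.array([[0.0]])
y = np.array([objective(0.0)])
repeats = 3   # stick with each chosen candidate for several evaluations
for _ in range(5):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]   # UCB maximizer
    for _ in range(repeats):   # repeated evaluations of the same point
        X = np.vstack([X, x_next[None, :]])
        y = np.append(y, objective(x_next[0]) + 0.01 * rng.standard_normal())
incumbent = float(X[np.argmax(y)][0])
```

Refitting the GP only once per batch of `repeats` evaluations is what makes the scheme cheaper than choosing a fresh candidate at every step, at the cost of less frequent acquisition updates.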
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
- Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Deep Gaussian Process Emulation using Stochastic Imputation [0.0]
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using imputation.
By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs.
arXiv Detail & Related papers (2021-07-04T10:46:23Z)
- Correlating AGP on a quantum computer [0.0]
We show how AGP can be efficiently implemented on a quantum computer with circuit depth, number of CNOTs, and number of measurements being linear in system size.
Results show highly accurate ground state energies in all correlation regimes of this model Hamiltonian.
arXiv Detail & Related papers (2020-08-13T23:56:05Z)
- Understanding Nesterov's Acceleration via Proximal Point Method [52.99237600875452]
The proximal point method (PPM) is often used as a building block for designing optimization algorithms.
In this work, we use the PPM to provide conceptually simple derivations, along with convergence analyses, of different versions of Nesterov's accelerated gradient method (AGM).
arXiv Detail & Related papers (2020-05-17T17:17:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.