Lattice fermions with solvable wide range interactions
- URL: http://arxiv.org/abs/2410.08467v1
- Date: Fri, 11 Oct 2024 02:37:22 GMT
- Title: Lattice fermions with solvable wide range interactions
- Authors: Ryu Sasaki
- Abstract summary: The exact solvability of $\mathcal{H}^R$ warrants that of a spinless lattice fermion $c_x$, $c_x^\dagger$, $\mathcal{H}^R_f=\sum_{x,y\in\mathcal{X}}c_x^\dagger\mathcal{H}^R(x,y)\,c_y$ based on the principle advocated recently by myself.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exactly solvable (spinless) lattice fermions with wide range interactions are constructed explicitly based on {\em exactly solvable stationary and reversible Markov chains} $\mathcal{K}^R$ reported a few years earlier by Odake and myself. The reversibility of $\mathcal{K}^R$ with the stationary distribution $\pi$ leads to a positive classical Hamiltonian $\mathcal{H}^R$. The exact solvability of $\mathcal{H}^R$ warrants that of a spinless lattice fermion $c_x$, $c_x^\dagger$, $\mathcal{H}^R_f=\sum_{x,y\in\mathcal{X}}c_x^\dagger\mathcal{H}^R(x,y) c_y$ based on the principle advocated recently by myself. The reversible Markov chains $\mathcal{K}^R$ are constructed by convolutions of the orthogonality measures of the discrete orthogonal polynomials of Askey scheme. Several explicit examples of the fermion systems with wide range interactions are presented.
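For orientation, the construction chain in the abstract can be sketched in formulas. The following is a sketch of the standard reversibility-symmetrization argument written in the abstract's notation; the paper's precise sign and normalization conventions may differ:

```latex
% Reversibility of the Markov chain K^R w.r.t. the stationary distribution \pi:
%   \pi(y)\,\mathcal{K}^R(x,y) = \pi(x)\,\mathcal{K}^R(y,x).
% Conjugating by \sqrt{\pi} produces a real symmetric matrix
T^R(x,y) = \pi(x)^{-1/2}\,\mathcal{K}^R(x,y)\,\pi(y)^{1/2},
% whose eigenvalues lie in [-1,1], so that
\mathcal{H}^R = \mathbb{1} - T^R \succeq 0
% is a positive classical Hamiltonian, second-quantized as
\mathcal{H}^R_f = \sum_{x,y\in\mathcal{X}} c_x^\dagger\,\mathcal{H}^R(x,y)\,c_y,
\qquad \{c_x, c_y^\dagger\} = \delta_{x,y}.
```

The exact solvability of $\mathcal{K}^R$ then transfers to $\mathcal{H}^R$ and, via the one-body matrix elements, to the fermion system.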
Related papers
- Near-Optimal and Tractable Estimation under Shift-Invariance [0.21756081703275998]
The class of all such signals is, however, extremely rich: it contains all exponential oscillations over $\mathbb{C}^n$ with total degree $s$.
We show that the statistical complexity of this class, as measured by the squared minimax radius of the $(\delta)$-confidence $\ell$-ball, is nearly the same as for the class of $s$-sparse signals, namely $O\left(s\log(en) + \log(\delta^{-1})\right)\cdot\log(en/s)$.
arXiv Detail & Related papers (2024-11-05T18:11:23Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
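As a concrete reference for the object being learned, here is a minimal NumPy sketch of an $m$-head scaled dot-product attention forward pass (standard textbook parameterization; the paper's exact setup, e.g. output projections, may differ):

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv):
    """Forward pass of an m-head attention layer.
    X: (T, d) input sequence; Wq, Wk, Wv: lists of (d, dh) per-head projections."""
    heads = []
    for Q, K, V in zip(Wq, Wk, Wv):
        q, k, v = X @ Q, X @ K, X @ V
        scores = q @ k.T / np.sqrt(q.shape[-1])          # (T, T) attention logits
        attn = np.exp(scores - scores.max(-1, keepdims=True))
        attn /= attn.sum(-1, keepdims=True)              # row-wise softmax
        heads.append(attn @ v)                           # (T, dh) per head
    return np.concatenate(heads, axis=-1)                # (T, m * dh)

rng = np.random.default_rng(0)
d, dh, m, T = 8, 4, 2, 5          # model dim, head dim, num heads, seq length
X = rng.normal(size=(T, d))
Wq = [rng.normal(size=(d, dh)) for _ in range(m)]
Wk = [rng.normal(size=(d, dh)) for _ in range(m)]
Wv = [rng.normal(size=(d, dh)) for _ in range(m)]
print(multi_head_attention(X, Wq, Wk, Wv).shape)   # (5, 8)
```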
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\,\epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - On the Self-Penalization Phenomenon in Feature Selection [69.16452769334367]
We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels.
As an application, we use this sparsity-inducing mechanism to build algorithms that are consistent for feature selection.
arXiv Detail & Related papers (2021-10-12T09:36:41Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A\in\mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Kernel Thinning [26.25415159542831]
Kernel thinning is a new procedure for compressing a distribution $\mathbb{P}$ more effectively than i.i.d. sampling or standard thinning.
We derive explicit non-asymptotic maximum mean discrepancy bounds for Gaussian, Matérn, and B-spline kernels.
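The quantity these bounds control, the maximum mean discrepancy, is easy to compute directly. A minimal NumPy sketch under a Gaussian kernel, comparing a sample with a naive every-fourth-point thinning (this is not the kernel thinning algorithm itself, only the metric it optimizes):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def mmd(X, Y, sigma=1.0):
    """Empirical maximum mean discrepancy between samples X and Y."""
    kxx = gaussian_kernel(X, X, sigma).mean()
    kyy = gaussian_kernel(Y, Y, sigma).mean()
    kxy = gaussian_kernel(X, Y, sigma).mean()
    return np.sqrt(max(kxx + kyy - 2 * kxy, 0.0))

rng = np.random.default_rng(0)
sample = rng.normal(size=(400, 2))   # points drawn from P
coreset = sample[::4]                # "standard thinning": keep every 4th point
print(mmd(sample, coreset))          # small when the coreset tracks the sample
```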
arXiv Detail & Related papers (2021-05-12T17:56:42Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
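The speedup comes from never forming the $n\times n$ kernel matrix: with $K=\Phi\Phi^\top$ for an $n\times r$ feature matrix $\Phi$ with positive entries, each Sinkhorn iteration only needs matrix-vector products $\Phi(\Phi^\top v)$ in $O(nr)$ time. A minimal NumPy sketch with made-up random positive features (the paper's specific feature maps $\varphi$ differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 1000, 16
Phi = rng.random((n, r))          # positive features: K = Phi @ Phi.T is entrywise > 0

def kernel_matvec(v):
    """Apply the implicit n x n symmetric kernel K = Phi Phi^T in O(n r) time."""
    return Phi @ (Phi.T @ v)

# Sinkhorn iterations using only the low-rank matvec
a = np.full(n, 1.0 / n)           # source marginal
b = np.full(n, 1.0 / n)           # target marginal
u = np.ones(n)
for _ in range(200):
    v = b / kernel_matvec(u)      # each step would cost O(n^2) with a dense K
    u = a / kernel_matvec(v)

# Marginals of the implicit plan diag(u) K diag(v), never materialized
row_sums = u * kernel_matvec(v)
col_sums = v * kernel_matvec(u)
print(np.allclose(row_sums, a), np.allclose(col_sums, b, atol=1e-6))
```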
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Data-driven Efficient Solvers for Langevin Dynamics on Manifold in High
Dimensions [12.005576001523515]
We study the Langevin dynamics of a physical system with manifold structure $\mathcal{M}\subset\mathbb{R}^p$.
We leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$.
We propose an implementable, unconditionally stable, data-driven finite volume scheme for this Fokker-Planck equation.
arXiv Detail & Related papers (2020-05-22T16:55:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.