On the algebraic approach to GUP in anisotropic space
- URL: http://arxiv.org/abs/2302.04170v1
- Date: Wed, 8 Feb 2023 16:22:00 GMT
- Title: On the algebraic approach to GUP in anisotropic space
- Authors: Andr\'e H. Gomes
- Abstract summary: We identify generalized uncertainty principle (GUP) models in anisotropic space satisfying two criteria: (i) invariance of commutators under canonical transformations, and (ii) physical independence of position and momentum from the ordering of auxiliary operators in their definitions.
As a consequence, we use these criteria to place important restrictions on how GUP models may be approached algebraically.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by current searches for signals of Lorentz symmetry violation in
nature and recent investigations on generalized uncertainty principle (GUP)
models in anisotropic space, in this paper we identify GUP models satisfying
two criteria: (i) invariance of commutators under canonical transformations,
and (ii) physical independence of position and momentum from the ordering of
auxiliary operators in their definitions. Compliance with these criteria is
fundamental if one wishes to unambiguously describe GUP using an algebraic
approach but, surprisingly, neither is trivially satisfied when GUP is assumed
within anisotropic space. As a consequence, we use these criteria to place
important restrictions on how GUP models may be approached algebraically.
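Criterion (i) can be illustrated with the familiar one-dimensional deformed algebra $[X, P] = i\hbar(1 + \beta P^2)$ from the broader GUP literature; the specific deformation and the momentum-space representation used below are standard choices from that literature, not taken from this paper. A minimal sympy sketch checks that the representation $X = i\hbar(1 + \beta p^2)\,d/dp$, $P = p$ reproduces the deformed commutator:

```python
import sympy as sp

p, beta, hbar = sp.symbols('p beta hbar', positive=True)
psi = sp.Function('psi')(p)

# Momentum-space representation commonly used for the deformed
# algebra [X, P] = i*hbar*(1 + beta*p^2):
#   P acts by multiplication, X = i*hbar*(1 + beta*p^2) * d/dp
def X(f):
    return sp.I * hbar * (1 + beta * p**2) * sp.diff(f, p)

def P(f):
    return p * f

# Apply the commutator [X, P] to a test wavefunction psi(p)
commutator = sp.simplify(X(P(psi)) - P(X(psi)))
expected = sp.I * hbar * (1 + beta * p**2) * psi
assert sp.simplify(commutator - expected) == 0
```

The derivative terms cancel exactly, leaving $i\hbar(1 + \beta p^2)\psi$; whether such a representation survives canonical transformations and operator-ordering ambiguities in the anisotropic case is precisely what the paper's two criteria probe.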
Related papers
- Remarks on the quasi-position representation in models of generalized
uncertainty principle
This note aims to elucidate certain aspects of the quasi-position representation frequently used in the investigation of one-dimensional models.
We focus on two key points: (i) contrary to recent claims, the quasi-position operator can possess physical significance even though it is non-Hermitian, and (ii) in the quasi-position representation, operators associated with position act as derivative operators with respect to the quasi-position coordinate.
arXiv Detail & Related papers (2023-06-20T11:46:56Z) - Invariant Causal Set Covering Machines
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus are not guaranteed to extract causally relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z) - Gauge-equivariant flow models for sampling in lattice field theories
with pseudofermions
This work presents gauge-equivariant architectures for flow-based sampling in fermionic lattice field theories using pseudofermions as estimators for the fermionic determinant.
This is the default approach in state-of-the-art lattice field theory calculations, making this development critical to the practical application of flow models to theories such as QCD.
arXiv Detail & Related papers (2022-07-18T21:13:34Z) - Constraining GUP Models Using Limits on SME Coefficients [0.0]
Generalized uncertainty principles (GUP) and, independently, Lorentz symmetry violations are two common features in many candidate theories of quantum gravity.
A large class of both isotropic and anisotropic GUP models is shown to produce signals experimentally indistinguishable from those predicted by the Standard Model Extension.
In particular, bounds on isotropic GUP models are improved by a factor of $10^7$ compared to current spectroscopic bounds, and anisotropic models are constrained for the first time.
arXiv Detail & Related papers (2022-05-04T13:04:51Z) - GroupifyVAE: from Group-based Definition to VAE-based Unsupervised
Representation Disentanglement
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the group-theory-based definition as a non-probabilistic inductive bias.
We train 1,800 models covering the most prominent VAE-based architectures on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z) - LieTransformer: Equivariant self-attention for Lie Groups
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels
Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors.
Recent advances in the theoretical description of GCNNs revealed that such models can generally be understood as performing convolutions with G-steerable kernels.
arXiv Detail & Related papers (2020-10-21T12:42:23Z) - Search for Efficient Formulations for Hamiltonian Simulation of
non-Abelian Lattice Gauge Theories
Hamiltonian formulation of lattice gauge theories (LGTs) is the most natural framework for the purpose of quantum simulation.
It remains an important task to identify the most accurate, yet computationally economical, Hamiltonian formulation(s) of such theories.
This paper is a first step toward addressing this question in the case of non-Abelian LGTs.
arXiv Detail & Related papers (2020-09-24T16:44:39Z) - Closed-Form Factorization of Latent Semantics in GANs
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z) - Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric
graphs
A common approach to defining convolutions on meshes is to interpret them as graphs and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.