On the algebraic approach to GUP in anisotropic space
- URL: http://arxiv.org/abs/2302.04170v1
- Date: Wed, 8 Feb 2023 16:22:00 GMT
- Title: On the algebraic approach to GUP in anisotropic space
- Authors: André H. Gomes
- Abstract summary: We identify generalized uncertainty principle (GUP) models in anisotropic space satisfying two criteria: (i) invariance of commutators under canonical transformations, and (ii) independence of the physical position and momentum from the ordering of the auxiliary operators in their definitions.
We then use these criteria to place important restrictions on which GUP models may be approached algebraically, and how.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by current searches for signals of Lorentz symmetry violation in
nature and recent investigations on generalized uncertainty principle (GUP)
models in anisotropic space, in this paper we identify GUP models satisfying
two criteria: (i) invariance of commutators under canonical transformations,
and (ii) independence of the physical position and momentum from the ordering
of auxiliary operators in their definitions. Compliance with these criteria is
fundamental if one wishes to describe GUP unambiguously using an algebraic
approach but, surprisingly, neither criterion is trivially satisfied when GUP
is assumed within anisotropic space. As a consequence, we use these criteria
to place important restrictions on which GUP models may be approached
algebraically, and how.
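For concreteness, a standard point of reference is the well-known Kempf-Mangano-Mann commutator, shown below together with a generic anisotropic deformation written purely as an illustration (it is not necessarily one of the paper's specific models):

```latex
% Isotropic GUP (Kempf-Mangano-Mann): one scalar deformation parameter \beta
[\hat{x}, \hat{p}] = i\hbar \left( 1 + \beta \hat{p}^{\,2} \right)

% Illustrative anisotropic deformation: the scalar \beta is promoted to a
% constant tensor \beta_{ij}, singling out preferred spatial directions;
% whether such a model admits a consistent algebraic treatment is exactly
% what criteria (i) and (ii) constrain
[\hat{x}_i, \hat{p}_j] = i\hbar \left( \delta_{ij} + \beta_{ij}\, \hat{p}^{\,2} \right)
```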
Related papers
- Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups [11.572188414440436]
We propose Lie aLgebrA Canonicalization (LieLAC), a novel approach that exploits only the action of infinitesimal generators of the symmetry group.
Operating within the framework of canonicalization, LieLAC can easily be integrated with unconstrained pre-trained models.
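As a rough sketch of what canonicalization means in practice (a generic toy example, not LieLAC's actual construction, which works through infinitesimal generators rather than the explicit closed-form alignment used here): map every input to a fixed representative of its group orbit, then feed that representative to an unconstrained pre-trained model.

```python
import numpy as np

def canonicalize(points: np.ndarray) -> np.ndarray:
    """Toy canonicalization of a 2-D point cloud under rotations.

    Centers the cloud and rotates its principal axis onto the x-axis, so
    every rotated copy of the same cloud collapses to a common orbit
    representative (up to a residual 180-degree ambiguity of the axis).
    """
    centered = points - points.mean(axis=0)
    # Principal axis = eigenvector of the 2x2 covariance with largest eigenvalue.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    axis = vecs[:, -1]
    theta = np.arctan2(axis[1], axis[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rotation = np.array([[c, -s], [s, c]])
    return centered @ rotation.T

# A rotated copy canonicalizes to the same representative (checked up to
# the 180-degree ambiguity, which appears as a global sign flip).
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 2)) @ np.diag([3.0, 1.0])
a = 0.7
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
ca, cb = canonicalize(cloud), canonicalize(cloud @ R.T)
print(np.allclose(ca, cb, atol=1e-6) or np.allclose(ca, -cb, atol=1e-6))
```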
arXiv Detail & Related papers (2024-10-03T17:21:30Z)
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
We also propose to deploy topological densification when fine-tuning relative representations: a topological regularization loss encouraging clustering within classes.
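For context, a minimal sketch of the baseline relative-representation construction the paper builds on (names and numbers below are illustrative; the paper's extra normalization step is not shown): each sample is re-encoded by its cosine similarities to a fixed set of anchors, which is already invariant to rotations, reflections, and isotropic rescalings of the embedding space.

```python
import numpy as np

def relative_representation(emb: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Re-encode embeddings by cosine similarity to a fixed anchor set.

    emb:     (n, d) absolute embeddings of the samples
    anchors: (k, d) absolute embeddings of the anchor samples
    returns: (n, k) relative representation, invariant to any
             angle-preserving transform of the embedding space.
    """
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return e @ a.T

# Cosine similarities are unchanged by a random orthogonal map of the space.
rng = np.random.default_rng(0)
emb, anchors = rng.normal(size=(5, 8)), rng.normal(size=(3, 8))
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
print(np.allclose(relative_representation(emb, anchors),
                  relative_representation(emb @ q, anchors @ q)))
```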
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- A Non-negative VAE: the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
In this paper, we introduce the generalized gamma belief network (Generalized GBN), which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
arXiv Detail & Related papers (2024-08-06T18:18:37Z)
- Remarks on the quasi-position representation in models of generalized uncertainty principle [0.0]
This note aims to elucidate certain aspects of the quasi-position representation frequently used in the investigation of one-dimensional models.
We focus on two key points: (i) contrary to recent claims, the quasi-position operator can possess physical significance even though it is non-Hermitian, and (ii) in the quasi-position representation, operators associated with position act as derivative operators on the quasi-position coordinate.
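For orientation, the textbook momentum-space representation of the standard one-dimensional deformed algebra (background material, not the note's new results) already shows position acting as a derivative operator; point (ii) concerns the analogous behavior in the quasi-position coordinate:

```latex
% Deformed algebra [x, p] = i\hbar (1 + \beta p^2), represented on
% momentum-space wavefunctions \phi(p):
\hat{p}\,\phi(p) = p\,\phi(p),
\qquad
\hat{x}\,\phi(p) = i\hbar \left( 1 + \beta p^{2} \right) \partial_{p}\,\phi(p)
```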
arXiv Detail & Related papers (2023-06-20T11:46:56Z)
- Gauge-equivariant flow models for sampling in lattice field theories with pseudofermions [51.52945471576731]
This work presents gauge-equivariant architectures for flow-based sampling in fermionic lattice field theories using pseudofermions as estimators for the fermionic determinant.
This is the default approach in state-of-the-art lattice field theory calculations, making this development critical to the practical application of flow models to theories such as QCD.
arXiv Detail & Related papers (2022-07-18T21:13:34Z)
- Constraining GUP Models Using Limits on SME Coefficients [0.0]
Generalized uncertainty principles (GUP) and, independently, Lorentz symmetry violations are two common features in many candidate theories of quantum gravity.
A large class of both isotropic and anisotropic GUP models is shown to produce signals experimentally indistinguishable from those predicted by the Standard Model Extension.
In particular, bounds on isotropic GUP models are improved by a factor of $10^7$ compared to current spectroscopic bounds, and anisotropic models are constrained for the first time.
arXiv Detail & Related papers (2022-05-04T13:04:51Z)
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address this by leveraging constraints derived from the group-theory-based definition of disentanglement as a non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels [16.143012623830792]
Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors.
Recent advances in the theoretical description of GCNNs revealed that such models can generally be understood as performing convolutions with G-steerable kernels.
arXiv Detail & Related papers (2020-10-21T12:42:23Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
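The closed-form step itself is compact enough to sketch (a schematic of the factorization idea as published for this method; the random matrix below is a stand-in for a real pre-trained weight): the latent directions that maximally perturb the first affine layer's output $Az$ are the top right singular vectors of $A$, i.e. the leading eigenvectors of $A^{T}A$.

```python
import numpy as np

def closed_form_directions(weight: np.ndarray, k: int = 3) -> np.ndarray:
    """Schematic closed-form factorization of latent semantics.

    weight: (out_dim, latent_dim) matrix A of the generator's first
            affine layer acting on the latent code z.
    Returns the k unit-norm latent directions n that maximize ||A n||,
    i.e. the leading eigenvectors of A^T A (computed here via SVD).
    """
    _, _, vt = np.linalg.svd(weight, full_matrices=False)
    return vt[:k]  # rows are candidate interpretable latent directions

# Illustrative usage with a random stand-in for a pre-trained weight.
A = np.random.default_rng(1).normal(size=(512, 128))
directions = closed_form_directions(A)
print(directions.shape)  # (3, 128)
```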
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.