Algebras of actions in an agent's representations of the world
- URL: http://arxiv.org/abs/2310.01536v1
- Date: Mon, 2 Oct 2023 18:24:51 GMT
- Title: Algebras of actions in an agent's representations of the world
- Authors: Alexander Dean, Eduardo Alonso and Esther Mondragon
- Abstract summary: We use our framework to reproduce the symmetry-based representations from the symmetry-based disentangled representation learning formalism.
We then study the algebras of the transformations of worlds with features that occur in simple reinforcement learning scenarios.
Using computational methods that we developed, we extract the algebras of the transformations of these worlds and classify them according to their properties.
- Score: 51.06229789727133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a framework to extract the algebra of the
transformations of worlds from the perspective of an agent. As a starting
point, we use our framework to reproduce the symmetry-based representations
from the symmetry-based disentangled representation learning (SBDRL) formalism
proposed by [1]; only the algebras of transformations of worlds that form groups
can be described using symmetry-based representations. We then study the
algebras of the transformations of worlds with features that occur in simple
reinforcement learning scenarios. Using computational methods that we
developed, we extract the algebras of the transformations of these worlds and
classify them according to their properties. Next, we generalise two
important results of SBDRL - the equivariance condition and the disentangling
definition - from working only with symmetry-based representations to working
with representations that capture the transformation properties of worlds
whose transformations form any algebra. Finally, we combine our generalised
equivariance condition and our generalised disentangling definition to show
that disentangled sub-algebras can each have their own individual equivariance
conditions, which can be treated independently.
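The abstract's central computational claim is that the algebra generated by an agent's actions can be extracted mechanically and then classified by its properties (group, monoid, and so on). As a rough illustration only (a toy construction of ours, not the authors' code; all names are hypothetical), the Python sketch below closes two generating actions on a four-state cyclic world under composition and checks the group axioms. For context, the SBDRL equivariance condition of [1] roughly requires a representation f to satisfy f(g · w) = g · f(w) for every group element g and world state w; it is exactly when a check like the one below certifies a group that symmetry-based representations apply.

```python
from itertools import product

# Toy world: four states arranged on a ring; actions permute states.
STATES = range(4)

def right(s):
    # move one step clockwise
    return (s + 1) % 4

def left(s):
    # move one step anticlockwise
    return (s - 1) % 4

def as_table(f):
    # represent a transformation by its effect on every state
    return tuple(f(s) for s in STATES)

def compose(f, g):
    # (f after g), with transformations given as lookup tables over STATES
    return tuple(f[g[s]] for s in STATES)

def extract_algebra(generators):
    # close the generating transformations under composition
    algebra, frontier = set(generators), set(generators)
    while frontier:
        new = {compose(f, g) for f, g in product(algebra, frontier)}
        new |= {compose(g, f) for f, g in product(frontier, algebra)}
        new -= algebra
        algebra |= new
        frontier = new
    return algebra

def classify(algebra):
    # closure holds by construction and function composition is associative,
    # so identity and inverses are what separate group / monoid / semigroup
    identity = tuple(STATES)
    if identity not in algebra:
        return "semigroup"
    if all(any(compose(f, g) == identity for g in algebra) for f in algebra):
        return "group"
    return "monoid"

algebra = extract_algebra({as_table(right), as_table(left)})
print(len(algebra), classify(algebra))  # -> 4 group  (the cyclic group Z_4)
```

For the cyclic world above, the extracted algebra has four elements and passes the group test (it is Z_4). A world with, say, an absorbing wall state would typically fail the inverse check and be classified as a monoid instead, which is the kind of non-group structure the paper's generalised equivariance condition is designed to accommodate.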
Related papers
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
We also propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- Current Symmetry Group Equivariant Convolution Frameworks for Representation Learning [5.802794302956837]
Euclidean deep learning is often inadequate for addressing real-world signals where the representation space is irregular and curved with complex topologies.
We focus on the importance of symmetry group equivariant deep learning models and their realization of convolution-like operations on graphs, 3D shapes, and non-Euclidean spaces.
arXiv Detail & Related papers (2024-09-11T15:07:18Z)
- In-Context Symmetries: Self-Supervised Learning through Contextual World Models [41.61360016455319]
We propose to learn a general representation that can adapt to be invariant or equivariant to different transformations by paying attention to context.
Our proposed algorithm, Contextual Self-Supervised Learning (ContextSSL), learns equivariance to all transformations.
arXiv Detail & Related papers (2024-05-28T14:03:52Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Equivariant Representation Learning in the Presence of Stabilizers [13.11108596589607]
EquIN is suitable for group actions that are not free, i.e., that stabilize data via nontrivial symmetries.
EquIN is theoretically grounded in the orbit-stabilizer theorem from group theory.
arXiv Detail & Related papers (2023-01-12T11:43:26Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- Equivariant Representation Learning via Class-Pose Decomposition [17.032782230538388]
We introduce a general method for learning representations that are equivariant to symmetries of data.
The components semantically correspond to intrinsic data classes and poses respectively.
Results show that our representations capture the geometry of data and outperform other equivariant representation learning frameworks.
arXiv Detail & Related papers (2022-07-07T06:55:52Z)
- Learning Algebraic Representation for Systematic Generalization in Abstract Reasoning [109.21780441933164]
We propose a hybrid approach to improve systematic generalization in reasoning.
We showcase a prototype with algebraic representation for the abstract spatial-temporal task of Raven's Progressive Matrices (RPM).
We show that the algebraic representation learned can be decoded by isomorphism to generate an answer.
arXiv Detail & Related papers (2021-11-25T09:56:30Z)
- Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views? [21.06669693699965]
We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action.
We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling.
arXiv Detail & Related papers (2021-10-14T15:46:53Z)
- Learning Algebraic Recombination for Compositional Generalization [71.78771157219428]
We propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
The key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra.
Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-07-14T07:23:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.