Transport of Algebraic Structure to Latent Embeddings
- URL: http://arxiv.org/abs/2405.16763v1
- Date: Mon, 27 May 2024 02:24:57 GMT
- Title: Transport of Algebraic Structure to Latent Embeddings
- Authors: Samuel Pfrommer, Brendon G. Anderson, Somayeh Sojoudi
- Abstract summary: Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space.
How can we learn to "union" two sets using only their latent embeddings while respecting associativity?
We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space.
- Score: 8.693845596949892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space. For example, in the field of 3D modeling, subsets of Euclidean space can be embedded as vectors using implicit neural representations. Such subsets also have a natural algebraic structure including operations (e.g., union) and corresponding laws (e.g., associativity). How can we learn to "union" two sets using only their latent embeddings while respecting associativity? We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space. This is achieved by learning a bijection from the latent space to a carefully designed mirrored algebra which is constructed on Euclidean space in accordance with desired laws. We evaluate these structural transport nets for a range of mirrored algebras against baselines that operate directly on the latent space. Our experiments provide strong evidence that respecting the underlying algebraic structure of the input space is key for learning accurate and self-consistent operations.
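To make the construction concrete, below is a minimal sketch of the conjugation idea, assuming a fixed linear bijection as a stand-in for the paper's learned invertible network and elementwise max as the mirrored "union"; both are simplifying assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of structural transport: conjugate a law-satisfying operation on a
# mirrored algebra through a bijection. The fixed linear map below is an
# illustrative stand-in for the learned invertible network.
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Stand-in for the learned bijection f: latent space -> mirrored algebra.
A = rng.normal(size=(d, d)) + d * np.eye(d)  # invertible with overwhelming probability
A_inv = np.linalg.inv(A)

def f(z):        # latent -> mirrored algebra
    return A @ z

def f_inv(m):    # mirrored algebra -> latent
    return A_inv @ m

def latent_union(z1, z2):
    # Elementwise max is associative, commutative, and idempotent -- the laws
    # of set union -- so the conjugated operation satisfies them as well.
    return f_inv(np.maximum(f(z1), f(z2)))

z1, z2, z3 = rng.normal(size=(3, d))
lhs = latent_union(latent_union(z1, z2), z3)
rhs = latent_union(z1, latent_union(z2, z3))
assert np.allclose(lhs, rhs)                                     # associativity
assert np.allclose(latent_union(z1, z2), latent_union(z2, z1))   # commutativity
assert np.allclose(latent_union(z1, z1), z1)                     # idempotence
```

The laws transfer exactly because conjugating an operation through a bijection preserves its algebraic identities; training then only needs to make the operation semantically accurate, never self-consistent.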
Related papers
- A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
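For intuition about the linear concept subspace picture in the entry above, here is a minimal sketch that erases a one-dimensional concept subspace by orthogonal projection. This is a simplified illustration, not the exact LEACE procedure, which computes a least-squares-optimal affine erasure map.

```python
# Erase a linear concept subspace from embeddings by orthogonal projection.
# Simplified for intuition; not the exact LEACE objective.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))   # embeddings of shape (n, d), synthetic here

# Suppose the concept (e.g., verbal number) lives in the span of B's columns.
B = rng.normal(size=(16, 1))     # one-dimensional concept subspace
Q, _ = np.linalg.qr(B)           # orthonormal basis for the subspace

P = Q @ Q.T                      # projector onto the concept subspace
X_erased = X - X @ P             # remove the concept component from each row

# Erased embeddings carry no component along the concept direction.
assert np.allclose(X_erased @ Q, 0.0)
```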
- Computing equivariant matrices on homogeneous spaces for Geometric Deep Learning and Automorphic Lie Algebras [0.0]
We compute equivariant maps from a homogeneous space $G/H$ of a Lie group $G$ to a module of this group.
This work has applications in the theoretical development of geometric deep learning and also in the theory of automorphic Lie algebras.
arXiv Detail & Related papers (2023-03-13T14:32:49Z)
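For reference, the equivariance condition at issue in the entry above: a map $\Phi$ from the homogeneous space $G/H$ to a $G$-module $V$ with representation $\rho$ is equivariant when $\Phi(g \cdot x) = \rho(g)\,\Phi(x)$ for all $g \in G$ and $x \in G/H$.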
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
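As a toy illustration of the compositional structure described in the entry above, the sketch below uses synthetic vectors standing in for real VLM embeddings, which satisfy the linear decomposition only approximately.

```python
# Toy version of "linear spaces of meanings": composite concepts decompose
# (approximately, in real VLMs) as sums of primitive embedding vectors.
import numpy as np

rng = np.random.default_rng(0)
d = 32
primitives = {w: rng.normal(size=d) for w in ["blue", "red", "car", "bird"]}

def embed(*words):
    # Composite meaning modeled as a sum of primitive vectors.
    return sum(primitives[w] for w in words)

vocab = {f"{c} {o}": embed(c, o) for c in ["blue", "red"] for o in ["car", "bird"]}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Compose "blue" and "car" in embedding space, decode by nearest neighbor.
query = primitives["blue"] + primitives["car"]
print(max(vocab, key=lambda name: cosine(vocab[name], query)))  # -> "blue car"
```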
- A substructural logic for quantum measurements [1.8782750537161614]
This paper presents a substructural logic of sequents with very restricted exchange and weakening rules.
It is sound with respect to sequences of measurements of a quantum system.
arXiv Detail & Related papers (2022-12-06T09:11:42Z)
- Geometry Interaction Knowledge Graph Embeddings [153.69745042757066]
We propose Geometry Interaction knowledge graph Embeddings (GIE), which learns spatial structures interactively between the Euclidean, hyperbolic and hyperspherical spaces.
Our proposed GIE can capture a richer set of relational information, model key inference patterns, and enable expressive semantic matching across entities.
arXiv Detail & Related papers (2022-06-24T08:33:43Z)
- Learning Algebraic Recombination for Compositional Generalization [71.78771157219428]
We propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
Key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra.
Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-07-14T07:23:46Z)
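The homomorphism insight in the entry above can be stated compactly: if $h$ maps latent syntactic terms to semantic values, then each syntactic operation $f$ has a semantic counterpart $g$ satisfying $h(f(x, y)) = g(h(x), h(y))$, so parsing and then interpreting agrees with interpreting the parts and then composing their meanings.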
- Abelian Neural Networks [48.52497085313911]
We first construct a neural network architecture for Abelian group operations and derive a universal approximation property.
We extend it to Abelian semigroup operations using the characterization of associative symmetric polynomials.
We train our models over fixed word embeddings and demonstrate improved performance over the original word2vec.
arXiv Detail & Related papers (2021-02-24T11:52:21Z)
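Below is a minimal sketch of the commutative-by-construction idea behind such architectures, with simplified stand-in networks rather than the paper's exact model.

```python
# A binary operation that is commutative by construction (Deep-Sets-style):
# op(x, y) = g(phi(x) + phi(y)). Weights here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 32
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(d, h))

def phi(x):      # shared encoder applied to each argument
    return np.tanh(W1 @ x)

def g(s):        # decoder applied to the symmetric combination
    return W2 @ s

def op(x, y):
    # phi(x) + phi(y) is invariant to swapping x and y, so op is
    # commutative for any choice of weights.
    return g(phi(x) + phi(y))

x, y = rng.normal(size=(2, d))
assert np.allclose(op(x, y), op(y, x))
```

Commutativity holds for any weights because vector addition is symmetric; making the operation associative as well requires the extra structure the paper develops.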
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product spaces.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendation show that the proposed switch spaces achieve new state-of-the-art performance.
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
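Below is a minimal sketch of sparse top-k gating over candidate spaces, in the spirit of the mechanism described in the entry above; the scorer and the per-space quantities are illustrative placeholders, not the paper's exact formulation.

```python
# Sparse top-k gating over candidate geometric spaces (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_spaces, k = 4, 2

def sparse_gate(scores, k):
    # Keep the top-k scores, softmax-normalize them, zero out the rest.
    top = np.argsort(scores)[-k:]
    weights = np.zeros_like(scores)
    weights[top] = np.exp(scores[top] - scores[top].max())
    weights[top] /= weights[top].sum()
    return weights

scores = rng.normal(size=n_spaces)     # would come from a learned scorer
per_space = rng.normal(size=n_spaces)  # e.g., distances in each component geometry

weights = sparse_gate(scores, k)
combined = weights @ per_space         # only k spaces contribute to the output
print(weights, combined)
```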
- A Universal Representation for Quantum Commuting Correlations [3.222802562733787]
We explicitly construct an Archimedean order unit space whose state space is affinely isomorphic to the set of quantum commuting correlations.
Our main results are achieved by characterizing when a finite set of positive contractions in an Archimedean order unit space can be realized as a set of projections on a Hilbert space.
arXiv Detail & Related papers (2021-02-11T03:15:47Z)
- Using Deep LSD to build operators in GANs latent space with meaning in real space [0.0]
Generative models are widely used in deep learning, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs).
Lack of correlation is important because it suggests that the latent space manifold is simpler to understand and manipulate.
arXiv Detail & Related papers (2021-02-09T21:05:20Z)