Directional Non-Commutative Monoidal Structures with Interchange Law via Commutative Generators
- URL: http://arxiv.org/abs/2505.24533v1
- Date: Fri, 30 May 2025 12:40:01 GMT
- Title: Directional Non-Commutative Monoidal Structures with Interchange Law via Commutative Generators
- Authors: Mahesh Godavarti
- Abstract summary: We introduce a class of algebraic structures that generalize one-dimensional monoidal systems into higher dimensions. We show that the framework unifies several well-known linear transforms in signal processing and data analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a novel framework consisting of a class of algebraic structures that generalize one-dimensional monoidal systems into higher dimensions by defining per-axis composition operators subject to non-commutativity and a global interchange law. These structures, defined recursively from a base case of vector-matrix pairs, model directional composition in multiple dimensions while preserving structural coherence through commutative linear operators. We show that this framework unifies several well-known linear transforms in signal processing and data analysis. In this framework, data indices are embedded into a composite structure that decomposes into simpler components. We show that classic transforms such as the Discrete Fourier Transform (DFT), the Walsh transform, and the Hadamard transform are special cases of our algebraic structure. The framework provides a systematic way to derive these transforms by appropriately choosing vector and matrix pairs. By subsuming classical transforms within a common structure, the framework also enables the development of learnable transformations tailored to specific data modalities and tasks.
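To make the construction concrete, the following is a minimal NumPy sketch (an illustration of the idea, not the paper's implementation): each axis of a data array gets its own linear operator, operators acting on different axes commute (the interchange law), and choosing the DFT matrix on both axes of a 2-D array recovers the 2-D DFT as a special case.

```python
import numpy as np

def dft_matrix(n):
    """n x n DFT matrix F[j, k] = exp(-2*pi*i*j*k / n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

def hadamard_matrix(n):
    """n x n Hadamard matrix (n a power of two), built recursively."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def axis_compose(X, M, axis):
    """Apply the linear operator M along one axis of X: one natural
    reading of the paper's per-axis composition, realized as a matrix
    acting on a single index of the data array."""
    return np.moveaxis(M @ np.moveaxis(X, axis, 0), 0, axis)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
F, H = dft_matrix(4), hadamard_matrix(4)

# Interchange law: operators on different axes commute, so the
# order of application does not matter.
row_then_col = axis_compose(axis_compose(X, F, axis=0), H, axis=1)
col_then_row = axis_compose(axis_compose(X, H, axis=1), F, axis=0)
assert np.allclose(row_then_col, col_then_row)

# Choosing the DFT matrix on both axes recovers the 2-D DFT.
assert np.allclose(axis_compose(axis_compose(X, F, 0), F, 1), np.fft.fft2(X))
```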
Related papers
- Generalized Linear Mode Connectivity for Transformers [87.32299363530996]
A striking phenomenon is linear mode connectivity (LMC), where independently trained models can be connected by low- or zero-loss paths. Prior work has predominantly focused on neuron re-ordering through permutations, but such approaches are limited in scope. We introduce a unified framework that captures four symmetry classes: permutations, semi-permutations, orthogonal transformations, and general invertible maps. This generalization enables, for the first time, the discovery of low- and zero-barrier linear paths between independently trained Vision Transformers and GPT-2 models.
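For context, the classical permutation-based alignment that this framework generalizes can be sketched in a few lines; the helper names and toy layer shapes below are illustrative, not the paper's API.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_layer(W_a, W_b):
    """Find the neuron permutation of model B's layer that best matches
    model A's layer (classical weight matching); the paper generalizes
    this step to richer symmetry classes."""
    cost = -W_a @ W_b.T                      # negative neuron similarity
    _, perm = linear_sum_assignment(cost)    # optimal matching
    return W_b[perm]

rng = np.random.default_rng(1)
W_a = rng.standard_normal((8, 16))
W_b = W_a[rng.permutation(8)]          # same weights, neurons re-ordered
W_b_aligned = align_layer(W_a, W_b)
assert np.allclose(W_a, W_b_aligned)   # alignment undoes the permutation

# A point on the linear path between the aligned models:
midpoint = 0.5 * W_a + 0.5 * W_b_aligned
```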
arXiv Detail & Related papers (2025-06-28T01:46:36Z)
- Directional Non-Commutative Monoidal Structures for Compositional Embeddings in Machine Learning [0.0]
We introduce a new structure for compositional embeddings built on directional non-commutative monoidal operators. Our construction defines a distinct composition operator ∘_i for each axis i, ensuring associative combination along each axis without imposing global commutativity. All axis-specific operators commute with one another, enforcing a global interchange law that enables consistent cross-axis compositions.
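One concrete model in which all of these axioms can be checked directly is block concatenation: horizontal and vertical stacking of matrices are each associative and non-commutative, and together they satisfy the interchange law. A minimal sketch (an illustration of the axioms, not the paper's construction):

```python
import numpy as np

def hcat(X, Y):  # composition along axis 1: place Y to the right of X
    return np.concatenate([X, Y], axis=1)

def vcat(X, Y):  # composition along axis 0: place Y below X
    return np.concatenate([X, Y], axis=0)

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

# Each operator is associative but non-commutative ...
assert np.allclose(hcat(hcat(A, B), C), hcat(A, hcat(B, C)))
assert not np.allclose(hcat(A, B), hcat(B, A))

# ... and the two operators satisfy the interchange law:
# (A o1 B) o2 (C o1 D) == (A o2 C) o1 (B o2 D)
assert np.allclose(vcat(hcat(A, B), hcat(C, D)),
                   hcat(vcat(A, C), vcat(B, D)))
```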
arXiv Detail & Related papers (2025-05-21T13:27:14Z)
- Dynamics of Transient Structure in In-Context Linear Regression Transformers [0.5242869847419834]
We show that when transformers are trained on in-context linear regression tasks with intermediate task diversity, they behave like ridge regression before specializing to the tasks in their training distribution. This transition from a general solution to a specialized solution is revealed by joint trajectory principal component analysis. We empirically validate this explanation by measuring the model complexity of our transformers as defined by the local learning coefficient.
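For reference, the ridge-regression predictor that the transformers are reported to implement early in training has a simple closed form; a minimal sketch with toy data (shapes and the regularization strength are illustrative):

```python
import numpy as np

def ridge_predict(X, y, x_query, lam=1.0):
    """Closed-form ridge regression, w = (X^T X + lam*I)^{-1} X^T y,
    evaluated at a query point."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return x_query @ w

rng = np.random.default_rng(3)
X = rng.standard_normal((16, 4))                  # in-context examples
w_true = rng.standard_normal(4)
y = X @ w_true + 0.1 * rng.standard_normal(16)    # noisy labels
print(ridge_predict(X, y, rng.standard_normal(4)))
```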
arXiv Detail & Related papers (2025-01-29T16:32:14Z)
- EqNIO: Subequivariant Neural Inertial Odometry [33.96552018734359]
We show that IMU data transforms equivariantly when rotated around the gravity vector and reflected with respect to arbitrary planes parallel to gravity.
We then map the IMU data into this frame, thereby achieving an invariant canonicalization that can be directly used with off-the-shelf inertial odometry networks.
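A minimal sketch of such a canonicalization, assuming gravity lies along the z-axis and using the mean horizontal acceleration as the heading reference (an illustrative frame choice, not EqNIO's actual construction; any data-dependent equivariant frame gives the same invariance):

```python
import numpy as np

def yaw_canonicalize(acc, gyr):
    """Rotate (T, 3) IMU samples about the gravity (z) axis so the mean
    horizontal acceleration points along +x."""
    h = acc[:, :2].mean(axis=0)            # mean horizontal acceleration
    theta = np.arctan2(h[1], h[0])         # its heading angle
    c, s = np.cos(-theta), np.sin(-theta)
    Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return acc @ Rz.T, gyr @ Rz.T

rng = np.random.default_rng(4)
acc, gyr = rng.standard_normal((50, 3)), rng.standard_normal((50, 3))
c, s = np.cos(1.2), np.sin(1.2)            # an arbitrary yaw rotation
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Rotating the input about gravity leaves the canonical form unchanged.
a1, g1 = yaw_canonicalize(acc, gyr)
a2, g2 = yaw_canonicalize(acc @ R.T, gyr @ R.T)
assert np.allclose(a1, a2) and np.allclose(g1, g2)
```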
arXiv Detail & Related papers (2024-08-12T17:42:46Z)
- Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations [75.14793516745374]
We propose to strengthen the structural inductive bias of a Transformer by intermediate pre-training.
Our experiments confirm that this helps with few-shot learning of syntactic tasks such as chunking.
Our analysis shows that the intermediate pre-training leads to attention heads that keep track of which syntactic transformation needs to be applied to which token.
arXiv Detail & Related papers (2024-07-05T14:29:44Z)
- Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings [60.698130703909804]
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset.
We propose SQ-Transformer that explicitly encourages systematicity in the embeddings and attention layers.
We show that SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets.
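The quantization step can be pictured as a generic vector-quantization snap of token embeddings onto a small codebook, so tokens in similar structural roles share a representation; this sketch is a plain VQ step, not SQ-Transformer's exact scheme.

```python
import numpy as np

def vector_quantize(emb, codebook):
    """Snap each embedding to its nearest codebook vector."""
    d2 = ((emb[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d2.argmin(axis=1)          # index of nearest code per token
    return codebook[codes], codes

rng = np.random.default_rng(5)
emb = rng.standard_normal((10, 8))         # token embeddings
codebook = rng.standard_normal((4, 8))     # learned structural clusters
quantized, codes = vector_quantize(emb, codebook)
print(codes)   # which structural cluster each token is assigned to
```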
arXiv Detail & Related papers (2024-02-09T15:53:15Z)
- How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps on understanding in-context learning (ICL) in more complex scenarios, by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
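A minimal sketch of such a synthetic problem, with a frozen random one-layer MLP standing in for the fixed representation function (illustrative, not the paper's exact construction): labels are linear in the representation, with a fresh weight vector per task.

```python
import numpy as np

rng = np.random.default_rng(6)

W_phi = rng.standard_normal((16, 8))            # fixed across all tasks
phi = lambda x: np.maximum(x @ W_phi.T, 0.0)    # frozen ReLU features

def sample_icl_task(n_examples=20, d=8):
    """One in-context task: y is linear in the representation phi(x),
    with a task-specific weight vector w."""
    w = rng.standard_normal(16)
    X = rng.standard_normal((n_examples, d))
    return X, phi(X) @ w

X, y = sample_icl_task()
# An in-context learner must infer w from the examples; with phi known,
# least squares on the features fits the labels exactly.
w_hat, *_ = np.linalg.lstsq(phi(X), y, rcond=None)
assert np.allclose(phi(X) @ w_hat, y)
```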
arXiv Detail & Related papers (2023-10-16T17:40:49Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
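The simplest such geometric structure is additive: the embedding of a composite concept decomposes approximately into a sum of per-factor vectors, so swapping one factor moves the embedding along a direction independent of the other factors. A toy sketch with synthetic vectors standing in for real VLM embeddings:

```python
import numpy as np

rng = np.random.default_rng(7)
color = {"blue": rng.standard_normal(64), "red": rng.standard_normal(64)}
noun = {"car": rng.standard_normal(64), "cube": rng.standard_normal(64)}

def compose(c, n):
    """Additive composition: embed('blue car') ~ z_blue + z_car."""
    return color[c] + noun[n]

# Swapping the noun moves the embedding along a consistent direction,
# regardless of the color it is paired with:
delta_blue = compose("blue", "cube") - compose("blue", "car")
delta_red = compose("red", "cube") - compose("red", "car")
assert np.allclose(delta_blue, delta_red)
```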
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Geometric Clifford Algebra Networks [53.456211342585824]
We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical systems.
GCANs are based on symmetry group transformations using geometric (Clifford) algebras.
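For a flavor of the underlying algebra, here is the geometric product in the plane algebra Cl(2,0) together with a rotor acting on a vector, the kind of group transformation such layers are built from (a minimal sketch, not the GCAN architecture):

```python
import numpy as np

def gp(x, y):
    """Geometric (Clifford) product in Cl(2,0). Multivectors are stored
    as [scalar, e1, e2, e12], with e1^2 = e2^2 = 1 and e12 = e1 e2."""
    s, a, b, c = x
    t, d, f, g = y
    return np.array([
        s*t + a*d + b*f - c*g,   # scalar part
        s*d + a*t - b*g + c*f,   # e1 part
        s*f + b*t + a*g - c*d,   # e2 part
        s*g + c*t + a*f - b*d,   # e12 (bivector) part
    ])

# A rotor R = cos(h) + sin(h) e12 rotates vectors by 2h via v -> R v R~.
h = np.pi / 8
R = np.array([np.cos(h), 0.0, 0.0, np.sin(h)])
R_rev = np.array([np.cos(h), 0.0, 0.0, -np.sin(h)])   # reverse of R
v = np.array([0.0, 1.0, 0.0, 0.0])                    # the vector e1

v_rot = gp(gp(R, v), R_rev)
assert np.allclose(v_rot, [0.0, np.cos(2*h), -np.sin(2*h), 0.0])
```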
arXiv Detail & Related papers (2023-02-13T18:48:33Z)
- Similarity Equivariant Linear Transformation of Joint Orientation-Scale Space Representations [11.57423546614283]
Group convolution generalizes the concept of convolution to linear operations that are equivariant to a transformation group.
Group convolution that is equivariant to similarity transformations is the most general shape-preserving linear operator.
We present an initial demonstration of its utility by using it to compute a shape-equivariant distribution of closed contours traced by particles undergoing Brownian motion in velocity.
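A minimal discrete analogue of this idea: convolution lifted to the four-fold rotation group C4 is equivariant in the sense that rotating the input rotates each feature map and cyclically shifts the group axis. The paper works with the continuous similarity group; this sketch is only illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def c4_group_conv(image, kernel):
    """Lift an image to the rotation group C4 by correlating with all
    four 90-degree rotations of the kernel."""
    return np.stack([
        correlate(image, np.rot90(kernel, k), mode="constant")
        for k in range(4)
    ])

rng = np.random.default_rng(8)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((3, 3))

out = c4_group_conv(img, ker)
out_rot = c4_group_conv(np.rot90(img), ker)

# Equivariance: a 90-degree input rotation rotates every feature map
# and cyclically shifts the group axis by one step.
assert np.allclose(out_rot, np.rot90(np.roll(out, 1, axis=0), axes=(1, 2)))
```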
arXiv Detail & Related papers (2022-03-13T23:53:51Z)
- Frame Averaging for Equivariant Shape Space Learning [85.42901997467754]
A natural way to incorporate symmetries in shape space learning is to ask that the mapping to the shape space (encoder) and mapping from the shape space (decoder) are equivariant to the relevant symmetries.
We present a framework for incorporating equivariance in encoders and decoders by introducing two contributions.
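The identity behind this construction is symmetrization: averaging an arbitrary map over a group, or over a smaller input-dependent frame, makes it exactly equivariant. A sketch over a small finite group (illustrative; the paper's frames make this efficient for continuous groups):

```python
import numpy as np

def frame_average(phi, x, group):
    """Symmetrize phi over a finite group of orthogonal matrices:
    Phi(x) = (1/|G|) * sum_g  g @ phi(g^T @ x)."""
    return np.mean([g @ phi(g.T @ x) for g in group], axis=0)

# Group: the sign-change (reflection) group in 2-D.
group = [np.diag([sx, sy]) for sx in (1, -1) for sy in (1, -1)]

rng = np.random.default_rng(9)
W = rng.standard_normal((2, 2))
phi = lambda x: np.tanh(W @ x) + x      # an arbitrary, non-equivariant map

x = rng.standard_normal(2)
g = group[2]
# The averaged map is exactly equivariant: Phi(g x) = g Phi(x).
assert np.allclose(frame_average(phi, g @ x, group),
                   g @ frame_average(phi, x, group))
```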
arXiv Detail & Related papers (2021-12-03T06:41:19Z)
- Tensor Component Analysis for Interpreting the Latent Space of GANs [41.020230946351816]
This paper addresses the problem of finding interpretable directions in the latent space of pre-trained Generative Adversarial Networks (GANs).
Our scheme allows for both linear edits corresponding to the individual modes of the tensor, and non-linear ones that model the multiplicative interactions between them.
We show experimentally that we can utilise the former to better separate style- from geometry-based transformations, and the latter to generate an extended set of possible transformations.
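A schematic of the two kinds of edit, with per-mode principal directions obtained from mode unfoldings of a toy latent tensor; the shapes and the rank-1 interaction term below are illustrative stand-ins, not the paper's exact operators.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-k matricization: unfold tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(10)
# Toy stand-in for layerwise GAN latent codes arranged as a 3-way
# tensor (sample x layer x channel).
Z = rng.standard_normal((8, 16, 32))

# Principal directions per mode (a multilinear analogue of PCA).
U = [np.linalg.svd(mode_unfold(Z, k), full_matrices=False)[0]
     for k in range(Z.ndim)]

z = Z[0]                      # one latent code, shape (layer, channel)
u_layer = U[1][:, 0]          # top direction of the layer mode
u_chan = U[2][:, 0]           # top direction of the channel mode

# Linear edit along a single mode (broadcast over channels):
z_lin = z + 3.0 * u_layer[:, None]
# Non-linear edit coupling two modes through a rank-1 interaction:
z_mult = z + 3.0 * np.outer(u_layer, u_chan)
```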
arXiv Detail & Related papers (2021-11-23T09:14:39Z)