Discovering Abstract Symbolic Relations by Learning Unitary Group Representations
- URL: http://arxiv.org/abs/2402.17002v4
- Date: Wed, 22 May 2024 23:02:44 GMT
- Title: Discovering Abstract Symbolic Relations by Learning Unitary Group Representations
- Authors: Dongsung Huh
- Abstract summary: We investigate a principled approach for symbolic operation completion (SOC).
SOC poses a unique challenge in modeling abstract relationships between discrete symbols.
We demonstrate that SOC can be efficiently solved by a minimal model - a bilinear map - with a novel factorized architecture.
- Score: 7.303827428956944
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We investigate a principled approach for symbolic operation completion (SOC), a minimal task for studying symbolic reasoning. While conceptually similar to matrix completion, SOC poses a unique challenge in modeling abstract relationships between discrete symbols. We demonstrate that SOC can be efficiently solved by a minimal model - a bilinear map - with a novel factorized architecture. Inspired by group representation theory, this architecture leverages matrix embeddings of symbols, modeling each symbol as an operator that dynamically influences others. Our model achieves perfect test accuracy on SOC with comparable or superior sample efficiency to Transformer baselines across most datasets, while boasting significantly faster learning speeds (100-1000$\times$). Crucially, the model exhibits an implicit bias towards learning general group structures, precisely discovering the unitary representations of underlying groups. This remarkable property not only confers interpretability but also has significant implications for automatic symmetry discovery in geometric deep learning. Overall, our work establishes group theory as a powerful guiding principle for discovering abstract algebraic structures in deep learning, and showcases matrix representations as a compelling alternative to traditional vector embeddings for modeling symbolic relationships.
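As a rough illustration of the abstract's core idea, the sketch below embeds each symbol as a matrix and scores candidate completions of an operation table with a bilinear readout over the product of two embeddings. This is a minimal sketch under stated assumptions, not the paper's exact factorized architecture: the symbol count, dimensions, readout, and toy cyclic-group table are all illustrative.

```python
# Minimal sketch: matrix embeddings + bilinear readout for operation completion.
import torch

n, d = 6, 8                                        # symbols, embedding dimension
M = torch.nn.Parameter(0.1 * torch.randn(n, d, d)) # one d x d matrix per symbol
R = torch.nn.Parameter(0.1 * torch.randn(n, d, d)) # bilinear readout per candidate

def logits(a, b):
    """Score every candidate c for the table cell (a, b)."""
    prod = M[a] @ M[b]                             # (batch, d, d): compose operators
    return torch.einsum('bij,cij->bc', prod, R)    # (batch, n): one score per symbol

# Toy SOC task: complete a partially observed Cayley table of Z/6Z.
cells = [(a, b, (a + b) % n) for a in range(n) for b in range(n)]
a, b, c = map(torch.tensor, zip(*cells[: int(0.7 * len(cells))]))  # observe 70%

opt = torch.optim.Adam([M, R], lr=1e-2)
for _ in range(2000):
    loss = torch.nn.functional.cross_entropy(logits(a, b), c)
    opt.zero_grad(); loss.backward(); opt.step()
```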
Related papers
- Geometric Relational Embeddings [19.383110247906256]
We propose geometric relational embeddings, a paradigm of embeddings that respect the underlying symbolic structures.
Results obtained from benchmark real-world datasets demonstrate the efficacy of geometric relational embeddings.
arXiv Detail & Related papers (2024-09-18T22:02:24Z)
- LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract Rules [1.3049516752695616]
We propose a "relational bottleneck" that separates object-level features from abstract rules, allowing learning from limited amounts of data.
We adapt the "relational bottleneck" strategy to a high-dimensional space, incorporating explicit vector binding operations between symbols and relational representations.
Our system benefits from the low overhead of operations in hyperdimensional space, making it significantly more efficient than the state of the art when evaluated on a variety of test datasets.
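For concreteness, here is a generic vector-symbolic-architecture sketch of the binding and bundling operations in a high-dimensional space that this line of work builds on; it illustrates the flavor of the operations, not the LARS-VSA system itself, and the roles and fillers are hypothetical.

```python
# Generic VSA sketch: bind role/filler hypervectors, bundle them, then unbind.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
hv = lambda: rng.choice([-1.0, 1.0], size=D)     # random bipolar hypervector

role_subject, role_object = hv(), hv()
alice, bob = hv(), hv()

# Bind roles to fillers (elementwise multiply) and bundle (sum) into one vector.
relation = role_subject * alice + role_object * bob

# Unbinding: multiplying by a role recovers a noisy copy of its filler.
probe = relation * role_subject
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(probe, alice), cos(probe, bob))        # high similarity vs. near zero
```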
arXiv Detail & Related papers (2024-05-23T11:05:42Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
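As a concrete member of the contrastive-SSL family analyzed here, the sketch below implements a minimal InfoNCE loss; the generative latent variable model itself is not reproduced, and the batch size, dimension, and temperature are illustrative assumptions.

```python
# Minimal InfoNCE loss, a standard instance of contrastive self-supervision.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.T / tau                    # pairwise similarities
    targets = torch.arange(z1.shape[0])      # positive pairs sit on the diagonal
    return F.cross_entropy(sim, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```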
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Learning Hierarchical Relational Representations through Relational Convolutions [2.5322020135765464]
We introduce "relational convolutional networks", a neural architecture equipped with computational mechanisms that capture progressively more complex relational features.
A key component of this framework is a novel operation that captures relational patterns in groups of objects by convolving graphlet filters.
We present the motivation and details of the architecture, together with a set of experiments to demonstrate how relational convolutional networks can provide an effective framework for modeling relational tasks that have hierarchical structure.
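The sketch below is a rough, simplified rendering of the relational-convolution idea: compute pairwise relations between objects, then slide a learned k x k "graphlet filter" over groups of objects and take inner products. It is illustrative only, not the paper's exact operation; all shapes are assumptions.

```python
# Simplified relational convolution: pairwise relations + graphlet filter.
import torch

n, d, k, d_r = 8, 16, 3, 4                     # objects, obj dim, group size, relation dim
x = torch.randn(n, d)                          # object embeddings
W = torch.randn(d_r, d, d)                     # bilinear maps producing d_r-dim relations
filt = torch.randn(k, k, d_r)                  # one graphlet filter

rel = torch.einsum('id,rde,je->ijr', x, W, x)  # rel[i, j] is a d_r-dim relation vector
# Convolve the filter over contiguous groups of k objects.
out = torch.stack([
    (rel[s:s + k, s:s + k, :] * filt).sum() for s in range(n - k + 1)
])
print(out.shape)                               # one relational feature per group
```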
arXiv Detail & Related papers (2023-10-05T01:22:50Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
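A classical Hopfield-style network gives a minimal picture of the attractor mechanism named above: a continuous state is pulled into one of a few discrete basins whose fixed points act like symbols. The sketch is illustrative only; the paper's model is considerably richer (sequences, compositionality, PLoT).

```python
# Hopfield-style attractor dynamics: continuous states settle into discrete basins.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))     # three "symbolic" attractors
W = patterns.T @ patterns / patterns.shape[1]        # Hebbian weights
np.fill_diagonal(W, 0.0)

state = patterns[1] + 0.8 * rng.standard_normal(64)  # noisy continuous input
for _ in range(20):                                  # dynamics settle into a basin
    state = np.sign(W @ state)

print([int(state @ p > 0.9 * 64) for p in patterns]) # which basin we landed in
```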
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
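As a toy illustration of fuzzy-logic continuous relaxation, the sketch below grounds a single implication rule on predicted scores using the Goguen (product-logic) implication, yielding a differentiable penalty that can be added to a training loss. The predicates and the rule are hypothetical, not LOGICSEG's actual formulae.

```python
# Fuzzy relaxation of a symbolic rule, e.g. "dog(x) -> animal(x)".
import torch

def implies(p, q):
    """Goguen fuzzy implication: 1 if p <= q, else q / p."""
    return torch.clamp(q / p.clamp(min=1e-6), max=1.0)

p_dog = torch.sigmoid(torch.randn(5, requires_grad=True))     # per-segment scores
p_animal = torch.sigmoid(torch.randn(5, requires_grad=True))

rule_truth = implies(p_dog, p_animal)         # 1.0 wherever the rule is satisfied
logic_loss = (1.0 - rule_truth).mean()        # penalize violations; differentiable
logic_loss.backward()
```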
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
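The sketch below captures the gist of the similarity-distribution idea: the downstream head sees only the distribution of pairwise similarities between encoded objects, never the object features themselves. Layer sizes and the task head are illustrative assumptions, not the paper's exact configuration.

```python
# Similarity-distribution scores: the head operates on relations only.
import torch

n, d = 5, 32
objs = torch.randn(n, d)                        # encoded objects
sim = torch.softmax(objs @ objs.T, dim=-1)      # similarity-distribution scores
decoder = torch.nn.Linear(n * n, 2)             # task head sees relations only
logits = decoder(sim.flatten())
```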
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- DensE: An Enhanced Non-commutative Representation for Knowledge Graph Embedding with Adaptive Semantic Hierarchy [4.607120217372668]
We develop a novel knowledge graph embedding method, named DensE, to provide an improved modeling scheme for the complex composition patterns of relations.
Our method decomposes each relation into an SO(3) group-based rotation operator and a scaling operator in three-dimensional (3-D) Euclidean space.
Experimental results on multiple benchmark knowledge graphs show that DensE outperforms the current state-of-the-art models for missing link prediction.
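A hedged sketch of the rotation-plus-scaling decomposition described above, using Rodrigues' formula for the SO(3) rotation: a triple is scored by how close the rotated and scaled head lands to the tail. The axis, angle, and scale are arbitrary example parameters, not learned DensE values.

```python
# Relation as an SO(3) rotation plus a scaling acting on 3-D entity embeddings.
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula for a rotation about `axis` by `angle` radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

head, tail = np.random.randn(3), np.random.randn(3)
R, s = rotation_matrix(np.array([0.0, 0.0, 1.0]), 0.3), 1.5  # relation parameters
score = -np.linalg.norm(s * (R @ head) - tail)  # higher = more plausible triple
```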
arXiv Detail & Related papers (2020-08-11T06:45:50Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
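One generic way to make "preserving the relations between samples" concrete is to learn a low-dimensional embedding Z whose inner products match a given sample-relation matrix K; the sketch below does this with a rank-r eigendecomposition. This is a standard formulation for illustration, not the paper's algorithm.

```python
# Relation-preserving subspace embedding: K ~= Z Z^T with rank-r Z.
import numpy as np

X = np.random.randn(100, 20)
K = X @ X.T                                   # sample-relation (similarity) matrix
r = 5
vals, vecs = np.linalg.eigh(K)                # eigenvalues in ascending order
Z = vecs[:, -r:] * np.sqrt(np.clip(vals[-r:], 0, None))
print(np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K))  # relative residual
```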
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Lattice Representation Learning [6.427169570069738]
We introduce theory and algorithms for learning discrete representations that lie on a lattice embedded in a Euclidean space.
Lattice representations possess an interesting combination of properties: a) they can be computed explicitly using lattice quantization, yet they can be learned efficiently using the ideas we introduce.
This article will focus on laying the groundwork for exploring and exploiting the first two properties, including a new mathematical result linking expressions used during training and inference time and experimental validation on two popular datasets.
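A small sketch of lattice quantization via Babai rounding, one standard way to map a continuous representation to an (approximately) nearest point of a lattice with basis B, i.e. q(x) = B round(B^{-1} x). The basis here is an arbitrary example, not one from the paper.

```python
# Lattice quantization by Babai rounding: q(x) = B * round(B^{-1} x).
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 1.0]])                    # lattice basis (columns generate points)
x = np.array([3.3, -0.7])                     # continuous representation

coeffs = np.round(np.linalg.solve(B, x))      # integer coordinates in the basis
q = B @ coeffs                                # the quantized lattice point
print(coeffs, q)
```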
arXiv Detail & Related papers (2020-06-24T16:05:11Z)