Vector symbolic architectures for context-free grammars
- URL: http://arxiv.org/abs/2003.05171v2
- Date: Fri, 25 Sep 2020 08:34:46 GMT
- Title: Vector symbolic architectures for context-free grammars
- Authors: Peter beim Graben, Markus Huber, Werner Meyer, Ronald Römer and Matthias Wolff
- Abstract summary: Vector symbolic architectures (VSA) are a viable approach for the hyperdimensional representation of symbolic data.
We present a rigorous framework for the representation of phrase structure trees and parse trees of context-free grammars (CFG) in Fock space.
Our approach could leverage the development of VSA for explainable artificial intelligence (XAI) by means of hyperdimensional deep neural computation.
- Score: 0.5862282909017474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background / introduction. Vector symbolic architectures (VSA) are a viable
approach for the hyperdimensional representation of symbolic data, such as
documents, syntactic structures, or semantic frames. Methods. We present a
rigorous mathematical framework for the representation of phrase structure
trees and parse trees of context-free grammars (CFG) in Fock space, i.e., the
infinite-dimensional Hilbert space used in quantum field theory. We
define a novel normal form for CFG by means of term algebras. Using a recently
developed software toolbox, called FockBox, we construct Fock space
representations for the trees built up by a CFG left-corner (LC) parser.
Results. We prove a universal representation theorem for CFG term algebras in
Fock space and illustrate our findings through a low-dimensional principal
component projection of the LC parser states. Conclusions. Our approach could
leverage the development of VSA for explainable artificial intelligence (XAI)
by means of hyperdimensional deep neural computation. It could be of
significance for the improvement of cognitive user interfaces and other
applications of VSA in machine learning.
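To make the representational idea concrete, the toy sketch below encodes a one-level phrase-structure tree as a superposition of role-filler bindings, in the style of holographic reduced representations. This is an illustrative sketch under assumptions of our own, not the paper's FockBox implementation: the binding operation (circular convolution), the dimensionality, and the symbol and role names are all choices made here for demonstration.

```python
import numpy as np

D = 1024  # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(0)

def rand_vec():
    # Random unit-norm hypervector standing for a symbol or role.
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution: the standard HRR binding operation.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse of binding via circular correlation
    # (convolution with the involution of a).
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

# Nonterminals of a toy CFG rule S -> NP VP.
sym = {s: rand_vec() for s in ["S", "NP", "VP"]}
# Role vectors marking the left and right daughters of a node.
left, right = rand_vec(), rand_vec()

# Encode the tree [S NP VP] as a superposition of role-filler
# bindings: the node label plus each daughter bound to its role.
tree = sym["S"] + bind(left, sym["NP"]) + bind(right, sym["VP"])

# Query: which constituent occupies the left-daughter position?
probe = unbind(tree, left)
best = max(sym, key=lambda s: probe @ sym[s])
print(best)  # expected: NP (recovered up to HRR noise)
```

Unbinding is only approximate, but in high dimensions the correct filler wins the similarity comparison with high probability; this noise tolerance is the property that hyperdimensional representations exploit.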
Related papers
- The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
arXiv Detail & Related papers (2025-05-23T13:31:22Z)
- Categorical Schrödinger Bridge Matching [58.760054965084656]
The Schrödinger Bridge (SB) is a powerful framework for solving generative modeling tasks such as unpaired domain translation.
We provide a theoretical and algorithmic foundation for solving SB in discrete spaces using the recently introduced Iterative Markovian Fitting (IMF) procedure.
This enables us to develop a practical computational algorithm for SB, which we call Categorical Schrödinger Bridge Matching (CSBM).
arXiv Detail & Related papers (2025-02-03T14:55:28Z)
- Learning Visual-Semantic Subspace Representations [49.17165360280794]
We introduce a nuclear norm-based loss function, grounded in the same information theoretic principles that have proved effective in self-supervised learning.
We present a theoretical characterization of this loss, demonstrating that, in addition to promoting class separability, it encodes the spectral geometry of the data within a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
- Understanding and Mitigating Hyperbolic Dimensional Collapse in Graph Contrastive Learning [70.0681902472251]
We propose a novel contrastive learning framework to learn high-quality graph embeddings in hyperbolic space.
Specifically, we design the alignment metric that effectively captures the hierarchical data-invariant information.
We show that in hyperbolic space one has to address leaf- and height-level uniformity, which relate to structural properties of trees.
arXiv Detail & Related papers (2023-10-27T15:31:42Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Category Theory for Quantum Natural Language Processing [0.0]
This thesis introduces quantum natural language processing (QNLP) models based on an analogy between computational linguistics and quantum mechanics.
The grammatical structure of text and sentences connects the meaning of words in the same way that entanglement structure connects the states of quantum systems.
We turn this abstract analogy into a concrete algorithm that translates the grammatical structure onto the architecture of parameterised quantum circuits.
We then use a hybrid classical-quantum algorithm to train the model so that evaluating the circuits computes the meaning of sentences in data-driven tasks.
arXiv Detail & Related papers (2022-12-13T14:38:57Z)
- Geometry Interaction Knowledge Graph Embeddings [153.69745042757066]
We propose Geometry Interaction knowledge graph Embeddings (GIE), which learns spatial structures interactively between the Euclidean, hyperbolic and hyperspherical spaces.
Our proposed GIE can capture a richer set of relational information, model key inference patterns, and enable expressive semantic matching across entities.
arXiv Detail & Related papers (2022-06-24T08:33:43Z)
- Incorporating Constituent Syntax for Coreference Resolution [50.71868417008133]
We propose a graph-based method to incorporate constituent syntactic structures.
We also explore utilising higher-order neighbourhood information to encode rich structures in constituent trees.
Experiments on the English and Chinese portions of OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T07:40:42Z)
- Computing on Functions Using Randomized Vector Representations [4.066849397181077]
We call this new function encoding and computing framework Vector Function Architecture (VFA).
Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems.
arXiv Detail & Related papers (2021-09-08T04:39:48Z)
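As a hedged illustration of the function-encoding idea in the VFA entry above (not the paper's reference code), fractional power encoding represents a scalar input by raising a random phasor hypervector to that power element-wise; inner products between such encodings then approximate a shift-invariant kernel. The dimensionality and names below are assumptions made for this sketch.

```python
import numpy as np

D = 2048  # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(1)

# Random phasor base vector: its i.i.d. phases determine the
# kernel that the encoding induces.
theta = rng.uniform(-np.pi, np.pi, size=D)

def encode(x):
    # Fractional power encoding: the base phasor raised to the
    # (possibly fractional) power x, applied element-wise.
    return np.exp(1j * theta * x)

def similarity(x, y):
    # Normalized inner product; approximates a shift-invariant
    # (here sinc-shaped) kernel in the encoded inputs.
    return np.real(np.vdot(encode(x), encode(y))) / D

print(similarity(0.3, 0.3))  # ~1.0: identical inputs
print(similarity(0.3, 5.0))  # ~0.0: distant inputs decorrelate
```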
- A Study of Continuous Vector Representations for Theorem Proving [2.0518509649405106]
We develop an encoding that allows for logical properties to be preserved and is additionally reversible.
This means that the tree shape of a formula including all symbols can be reconstructed from the dense vector representation.
We propose datasets that can be used to train models on these syntactic and semantic properties.
arXiv Detail & Related papers (2021-01-22T15:04:54Z)
- Orthologics for Cones [5.994412766684843]
In this paper we study logics for such geometric structures.
We describe an extension of minimal orthologic with a partial modularity rule that holds for closed convex cones.
This logic combines a feasible data structure (exploiting convexity/conicity) with sufficient expressivity, including full orthonegation.
arXiv Detail & Related papers (2020-08-07T13:28:27Z)
- Software Language Comprehension using a Program-Derived Semantics Graph [29.098303489400394]
We present the program-derived semantics graph (PSG), a new structure to capture the semantics of code.
The PSG is designed to provide a single structure for capturing program semantics at multiple levels of abstraction.
Although our exploration into the PSG is in its infancy, our early results and architectural analysis indicate it is a promising new research direction to automatically extract program semantics.
arXiv Detail & Related papers (2020-04-02T01:37:57Z)
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
- Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolutional networks (GCN).
We design a specific GAE-based model for graph clustering that is consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
- Deep Metric Structured Learning For Facial Expression Recognition [58.7528672474537]
We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure.
A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces.
We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
arXiv Detail & Related papers (2020-01-18T06:23:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.