Putting a Spin on Language: A Quantum Interpretation of Unary
Connectives for Linguistic Applications
- URL: http://arxiv.org/abs/2004.04128v3
- Date: Mon, 6 Sep 2021 00:57:52 GMT
- Title: Putting a Spin on Language: A Quantum Interpretation of Unary
Connectives for Linguistic Applications
- Authors: Adriana D. Correia (Utrecht University), Henk T. C. Stoof (Utrecht
University), Michael Moortgat (Utrecht University)
- Abstract summary: Extended versions of the Lambek Calculus rely on unary modalities to allow the controlled application of structural rules.
Proposals for compositional interpretation of Lambek Calculus in the compact closed category of FVect and linear maps have been made.
Our aim is to turn the modalities into first-class citizens of the vectorial interpretation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extended versions of the Lambek Calculus currently used in computational
linguistics rely on unary modalities to allow for the controlled application of
structural rules affecting word order and phrase structure. These controlled
structural operations give rise to derivational ambiguities that are missed by
the original Lambek Calculus or its pregroup simplification. Proposals for
compositional interpretation of extended Lambek Calculus in the compact closed
category of FVect and linear maps have been made, but in these proposals the
syntax-semantics mapping ignores the control modalities, effectively
restricting their role to the syntax. Our aim is to turn the modalities into
first-class citizens of the vectorial interpretation. Building on the
directional density matrix semantics, we extend the interpretation of the type
system with an extra spin density matrix space. The interpretation of proofs
then results in ambiguous derivations being tensored with orthogonal spin
states. Our method introduces a way of simultaneously representing co-existing
interpretations of ambiguous utterances, and provides a uniform framework for
the integration of lexical and derivational ambiguity.
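To make the construction concrete, here is a minimal NumPy sketch of the central move described in the abstract: two co-existing readings of an ambiguous derivation, each a density matrix, are tensored with orthogonal spin basis states and mixed into a single joint state. The sketch is illustrative only (toy dimensions and values, not the paper's directional density matrix semantics in full); `density`, `trace_out_spin`, and all vectors are made up for the example.

```python
import numpy as np

def density(v):
    """Pure-state density matrix |v><v| of a normalised vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two co-existing readings of an ambiguous phrase, as density matrices
# on a toy 2-dimensional meaning space (illustrative values only).
reading_a = density(np.array([1.0, 0.0]))
reading_b = density(np.array([1.0, 1.0]))

# Orthogonal spin states labelling the two derivations.
spin_up   = density(np.array([1.0, 0.0]))
spin_down = density(np.array([0.0, 1.0]))

# The ambiguous utterance: each reading tensored with its own spin
# label, mixed with equal weights into one density matrix.
rho = 0.5 * np.kron(spin_up, reading_a) + 0.5 * np.kron(spin_down, reading_b)

def trace_out_spin(rho, dim_spin=2, dim_sem=2):
    """Partial trace over the spin factor of spin (x) semantics."""
    r = rho.reshape(dim_spin, dim_sem, dim_spin, dim_sem)
    return np.einsum('iaib->ab', r)

# Discarding the spin label leaves the mixture of the two readings;
# projecting onto spin_up or spin_down selects a single reading.
print(trace_out_spin(rho))   # == 0.5 * (reading_a + reading_b)
```

Because the spin labels are orthogonal, the two readings do not interfere: both interpretations are carried along simultaneously, and either one can be recovered by a spin measurement.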
Related papers
- Learning Visual-Semantic Subspace Representations for Propositional Reasoning [49.17165360280794]
We propose a novel approach for learning visual representations that conform to a specified semantic structure.
Our approach is based on a new nuclear norm-based loss.
We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
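As a rough illustration of the nuclear-norm idea in the entry above (a generic sketch under my own naming, not the paper's actual loss or training setup): the nuclear norm of a matrix of embeddings is the sum of its singular values, and penalising it drives the rows toward a low-dimensional subspace.

```python
import numpy as np

def nuclear_norm_loss(embeddings):
    """Sum of singular values of an (n, d) matrix of row embeddings.
    Hypothetical stand-in for the paper's nuclear norm-based loss."""
    return np.linalg.norm(embeddings, ord='nuc')

# A rank-1 batch (all rows parallel) scores lower than a full-rank
# batch of comparable scale.
low_rank  = np.outer(np.ones(4), np.array([1.0, 0.0, 0.0]))  # nuclear norm 2.0
full_rank = np.eye(4, 3)                                     # nuclear norm 3.0
print(nuclear_norm_loss(low_rank), nuclear_norm_loss(full_rank))
```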
- Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations [80.45474362071236]
It is unclear whether the compositional semantics of sentences can be directly reflected as compositional operations in the embedding space.
We propose InterSent, an end-to-end framework for learning interpretable sentence embeddings.
arXiv Detail & Related papers (2023-05-24T00:44:49Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Plurality and Quantification in Graph Representation of Meaning [4.82512586077023]
Our graph language covers the essentials of natural language semantics using only monadic second-order variables.
We present a unification-based mechanism for constructing semantic graphs at a simple syntax-semantics interface.
The present graph formalism is applied to linguistic issues in distributive predication, cross-categorial conjunction, and scope permutation of quantificational expressions.
arXiv Detail & Related papers (2021-12-13T07:04:41Z)
- Vector Space Semantics for Lambek Calculus with Soft Subexponentials [0.8287206589886879]
We develop a vector space semantics for Lambek Calculus with Soft Subexponentials.
We construct compositional vector interpretations for parasitic gap noun phrases and discourse units with anaphora and ellipsis.
arXiv Detail & Related papers (2021-11-22T16:39:30Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
- A Frobenius Algebraic Analysis for Parasitic Gaps [4.254099382808598]
We identify two types of parasitic gapping where the duplication of semantic content can be confined to the lexicon.
For parasitic gaps affecting arguments of the same predicate, the polymorphism is associated with the lexical item that introduces the primary gap.
A compositional translation relates syntactic types and derivations to the interpreting compact closed category of finite dimensional vector spaces.
arXiv Detail & Related papers (2020-05-12T09:36:15Z)
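The duplication of semantic content mentioned in the entry above is typically modelled by the Frobenius copying map on finite dimensional vector spaces; here is a tiny NumPy sketch of just that map (illustrative, not the paper's full analysis): the copy map sends each basis vector e_i to e_i ⊗ e_i, so a single lexical meaning can fill both the primary and the parasitic gap.

```python
import numpy as np

def frobenius_copy(v):
    """Frobenius copying map V -> V (x) V over a fixed basis, with
    e_i |-> e_i (x) e_i. On coordinates, v lands on the diagonal of a
    dim x dim array, flattened to a vector of length dim**2."""
    return np.diag(v).reshape(-1)

noun = np.array([0.2, 0.5, 0.3])   # illustrative lexical meaning
both_gaps = frobenius_copy(noun)   # one copy available per gap position
```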
- Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality [3.345437353879255]
We develop a categorical distributional semantics for Lambek Calculus with a Relevant Modality, !L*.
We instantiate this category to finite dimensional vector spaces and linear maps via "quantisation" functors.
We apply the model to construct categorical and concrete semantic interpretations for the motivating example of !L*: the derivation of a phrase with a parasitic gap.
arXiv Detail & Related papers (2020-05-06T18:58:21Z)
- APo-VAE: Text Generation in Hyperbolic Space [116.11974607497986]
In this paper, we investigate text generation in a hyperbolic latent space to learn continuous hierarchical representations.
An Adversarial Poincaré Variational Autoencoder (APo-VAE) is presented, where both the prior and variational posterior of latent variables are defined over a Poincaré ball via wrapped normal distributions.
Experiments in language modeling and dialog-response generation tasks demonstrate the effectiveness of the proposed APo-VAE model.
arXiv Detail & Related papers (2020-04-30T19:05:41Z)
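For the APo-VAE entry above, a minimal sketch of a wrapped normal on the Poincaré ball, under the simplifying assumption that the mean sits at the origin (the full construction also handles non-origin means via parallel transport; all names here are illustrative): sample a Gaussian in the tangent space at the origin and push it through the exponential map, which keeps the result strictly inside the unit ball.

```python
import numpy as np

def exp_map_origin(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with
    curvature -c: exp_0(v) = tanh(sqrt(c)||v||) * v / (sqrt(c)||v||)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def sample_wrapped_normal(dim, sigma=1.0, c=1.0, rng=None):
    """Wrapped normal: a Gaussian sample in the tangent space at the
    origin, pushed forward through the exponential map."""
    rng = rng or np.random.default_rng()
    v = rng.normal(scale=sigma, size=dim)
    return exp_map_origin(v, c)

z = sample_wrapped_normal(dim=2, sigma=0.5)
assert np.linalg.norm(z) < 1.0   # samples stay inside the unit ball (c = 1)
```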