Towards logical negation for compositional distributional semantics
- URL: http://arxiv.org/abs/2005.04929v1
- Date: Mon, 11 May 2020 08:51:30 GMT
- Title: Towards logical negation for compositional distributional semantics
- Authors: Martha Lewis
- Abstract summary: The categorical compositional distributional model of meaning gives the composition of words into phrases and sentences pride of place.
This paper gives some steps towards providing this operator, modelling it as a version of projection onto the subspace orthogonal to a word.
- Score: 2.449372198427156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The categorical compositional distributional model of meaning gives the
composition of words into phrases and sentences pride of place. However, it has
so far lacked a model of logical negation. This paper gives some steps towards
providing this operator, modelling it as a version of projection onto the
subspace orthogonal to a word. We give a small demonstration of the operator's
performance in a sentence entailment task.
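As a rough illustration of the core idea, here is a minimal sketch of negation as projection onto the orthogonal complement of a word vector, in the plain-vector case. The function name and toy vectors are illustrative, and the paper itself explores refinements of this operator:

```python
import numpy as np

def negation_projector(word_vec: np.ndarray) -> np.ndarray:
    """Projector onto the subspace orthogonal to word_vec:
    P = I - vv^T / (v^T v). Applying P to another vector removes
    its component along word_vec."""
    v = word_vec / np.linalg.norm(word_vec)
    return np.eye(len(v)) - np.outer(v, v)

# Toy example: applying "not dog" to "cat" keeps only the part of
# "cat" that is independent of "dog" (all vectors are made up).
dog = np.array([1.0, 0.2, 0.0])
cat = np.array([0.9, 0.1, 0.5])
not_dog_cat = negation_projector(dog) @ cat
print(not_dog_cat, dog @ not_dog_cat)  # second value is ~0
```

Note that P is idempotent (P @ P equals P), so applying the projector again leaves an already-projected vector unchanged.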
Related papers
- Optimal synthesis embeddings [1.565361244756411]
We introduce a word embedding composition method based on the intuitive idea that a fair embedding representation for a given set of words should satisfy certain properties.
We show that our approach excels in solving probing tasks designed to capture simple linguistic features of sentences.
arXiv Detail & Related papers (2024-06-10T18:06:33Z) - Learning Visual-Semantic Subspace Representations for Propositional Reasoning [49.17165360280794]
We propose a novel approach for learning visual representations that conform to a specified semantic structure.
Our approach is based on a new nuclear norm-based loss (a generic sketch of such a penalty follows this entry).
We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z) - Conjunct Resolution in the Face of Verbal Omissions [51.220650412095665]
- Conjunct Resolution in the Face of Verbal Omissions [51.220650412095665]
We propose a conjunct resolution task that operates directly on the text and makes use of a split-and-rephrase paradigm in order to recover the missing elements in the coordination structure.
We curate a large dataset, containing over 10K examples of naturally-occurring verbal omissions with crowd-sourced annotations.
We train various neural baselines for this task, and show that while our best method obtains decent performance, it leaves ample space for improvement.
arXiv Detail & Related papers (2023-05-26T08:44:02Z) - Bridging Continuous and Discrete Spaces: Interpretable Sentence
Representation Learning via Compositional Operations [80.45474362071236]
It is unclear whether the compositional semantics of sentences can be directly reflected as compositional operations in the embedding space.
We propose InterSent, an end-to-end framework for learning interpretable sentence embeddings.
arXiv Detail & Related papers (2023-05-24T00:44:49Z) - Semantic Operator Prediction and Applications [0.0]
The QDMR formalism for semantic parsing is implemented using a sequence-to-sequence model with attention, but uses only part-of-speech (POS) tags to represent the words of a sentence, keeping training as simple and fast as possible.
arXiv Detail & Related papers (2023-01-01T13:20:57Z) - Fuzzy Generalised Quantifiers for Natural Language in Categorical
Compositional Distributional Semantics [5.2424255020469595]
We consider fuzzy versions of quantifiers along the lines of Zadeh (a toy sketch of the idea follows this entry).
We show that this category is a concrete instantiation of the compositional distributional model.
arXiv Detail & Related papers (2021-09-23T09:15:15Z) - Unsupervised Distillation of Syntactic Information from Contextualized
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z) - Assessing Phrasal Representation and Composition in Transformers [13.460125148455143]
Deep transformer models have pushed performance on NLP tasks to new limits.
We present systematic analysis of phrasal representations in state-of-the-art pre-trained transformers.
We find that phrase representation in these models relies heavily on word content, with little evidence of nuanced composition.
arXiv Detail & Related papers (2020-10-08T04:59:39Z) - Learning Probabilistic Sentence Representations from Paraphrases [47.528336088976744]
We define probabilistic models that produce distributions for sentences.
We train our models on paraphrases and demonstrate that they naturally capture sentence specificity.
Our model captures sentential entailment and provides ways to analyze the specificity and preciseness of individual words (a toy sketch follows this entry).
arXiv Detail & Related papers (2020-05-16T21:10:28Z) - Categorical Vector Space Semantics for Lambek Calculus with a Relevant
- Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality [3.345437353879255]
We develop a categorical distributional semantics for Lambek Calculus with a Relevant Modality, !L*.
We instantiate this category to finite dimensional vector spaces and linear maps via "quantisation" functors.
We apply the model to construct categorical and concrete semantic interpretations for the motivating example of !L*: the derivation of a phrase with a parasitic gap.
arXiv Detail & Related papers (2020-05-06T18:58:21Z) - Multi-Step Inference for Reasoning Over Paragraphs [95.91527524872832]
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.
We present a compositional model reminiscent of neural module networks that can perform chained logical reasoning.
arXiv Detail & Related papers (2020-04-06T21:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.