Distributional Formal Semantics
- URL: http://arxiv.org/abs/2103.01713v1
- Date: Tue, 2 Mar 2021 13:38:00 GMT
- Title: Distributional Formal Semantics
- Authors: Noortje J. Venhuizen and Petra Hendriks and Matthew W. Crocker and
Harm Brouwer
- Abstract summary: We propose a Distributional Formal Semantics that integrates distributionality into a formal semantic system on the level of formal models.
This approach offers probabilistic, distributed meaning representations that are also inherently compositional.
We show how these representations allow for probabilistic inference, and how the information-theoretic notion of "information" naturally follows from it.
- Score: 0.18352113484137625
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Natural language semantics has recently sought to combine the complementary
strengths of formal and distributional approaches to meaning. More
specifically, proposals have been put forward to augment formal semantic
machinery with distributional meaning representations, thereby introducing the
notion of semantic similarity into formal semantics, or to define
distributional systems that aim to incorporate formal notions such as
entailment and compositionality. However, given the fundamentally different
'representational currency' underlying formal and distributional approaches -
models of the world versus linguistic co-occurrence - their unification has
proven extremely difficult. Here, we define a Distributional Formal Semantics
that integrates distributionality into a formal semantic system on the level of
formal models. This approach offers probabilistic, distributed meaning
representations that are also inherently compositional, and that naturally
capture fundamental semantic notions such as quantification and entailment.
Furthermore, we show how the probabilistic nature of these representations
allows for probabilistic inference, and how the information-theoretic notion of
"information" (measured in terms of Entropy and Surprisal) naturally follows
from it. Finally, we illustrate how meaning representations can be derived
incrementally from linguistic input using a recurrent neural network model, and
how the resultant incremental semantic construction procedure intuitively
captures key semantic phenomena, including negation, presupposition, and
anaphoricity.
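To make the flavour of these representations concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation): a proposition's meaning is a vector over a sampled set of formal models (possible worlds), its probability is the fraction of models in which it holds, composition is pointwise, inference is conditioning on the model set, and Surprisal/Entropy follow from the resulting probabilities. The toy world specification and all names below are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation): meaning vectors
# defined over a sampled set of formal models, in the general spirit of a
# Distributional Formal Semantics. The toy world and all names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_MODELS = 10_000  # each index is one sampled model (possible world)

# A proposition's meaning is a binary vector: True where it holds in a model.
# Two toy propositions are sampled with a dependency between them.
rain = rng.random(N_MODELS) < 0.3                    # P(rain) ~ 0.3
wet = np.where(rain, rng.random(N_MODELS) < 0.9,     # wet is likely given rain
               rng.random(N_MODELS) < 0.2)           # and less likely otherwise

def prob(v):
    """Probability of a proposition = fraction of models in which it holds."""
    return v.mean()

def conj(a, b):
    """Composition is pointwise: conjunction = intersection of model sets."""
    return a & b

def neg(a):
    """Negation = complement over the model set."""
    return ~a

def cond_prob(a, given):
    """Probabilistic inference: P(a | given), restricted to models where 'given' holds."""
    return conj(a, given).sum() / given.sum()

def surprisal(v):
    """Surprisal (in bits) of a proposition."""
    return -np.log2(prob(v))

def entropy(p):
    """Entropy (in bits) of a binary proposition with probability p."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

print(f"P(wet)           ~ {prob(wet):.2f}")
print(f"P(wet | rain)    ~ {cond_prob(wet, rain):.2f}")   # inference by conditioning
print(f"P(wet & ~rain)   ~ {prob(conj(wet, neg(rain))):.2f}")
print(f"Surprisal(rain)  ~ {surprisal(rain):.2f} bits")
print(f"Entropy(P(rain)) ~ {entropy(prob(rain)):.2f} bits")
```

Under these assumptions, entailment can be read off such vectors: a proposition p entails q (up to sampling noise) when every sampled model satisfying p also satisfies q, i.e. when cond_prob(q, p) approaches 1.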
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Learning Visual-Semantic Subspace Representations for Propositional Reasoning [49.17165360280794]
We propose a novel approach for learning visual representations that conform to a specified semantic structure.
Our approach is based on a new nuclear norm-based loss.
We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
- A Note on an Inferentialist Approach to Resource Semantics [48.65926948745294]
'Inferentialism' is the view that meaning is given in terms of inferential behaviour.
This paper shows how 'inferentialism' enables a versatile and expressive framework for resource semantics.
arXiv Detail & Related papers (2024-05-10T14:13:21Z)
- Grounded learning for compositional vector semantics [1.4344589271451351]
This work proposes a way for compositional distributional semantics to be implemented within a spiking neural network architecture.
We also describe a means of training word representations using labelled images.
arXiv Detail & Related papers (2024-01-10T22:12:34Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised fashion rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions [5.763375492057694]
This paper presents a multi-relational model that explicitly leverages such a structure to derive word embeddings from definitions.
An empirical analysis demonstrates that the framework can help impose the desired structural constraints.
Experiments reveal the superiority of the hyperbolic word embeddings over their Euclidean counterparts.
arXiv Detail & Related papers (2023-05-12T08:16:06Z)
- Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance [72.50214227616728]
Interpretability methods are valuable only if their explanations faithfully describe the explained model.
We consider neural networks whose predictions are invariant under a specific symmetry group.
arXiv Detail & Related papers (2023-04-13T17:59:03Z)
- Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis [13.07356367140208]
We introduce DistNLI, a new diagnostic dataset for natural language inference.
We find that the extent of models' understanding is associated with model size and vocabulary size.
arXiv Detail & Related papers (2022-09-11T00:33:28Z)
- Plurality and Quantification in Graph Representation of Meaning [4.82512586077023]
Our graph language covers the essentials of natural language semantics using only monadic second-order variables.
We present a unification-based mechanism for constructing semantic graphs at a simple syntax-semantics interface.
The present graph formalism is applied to linguistic issues in distributive predication, cross-categorial conjunction, and scope permutation of quantificational expressions.
arXiv Detail & Related papers (2021-12-13T07:04:41Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to address this need.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvements on 11 tasks, including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)