Linguists Who Use Probabilistic Models Love Them: Quantification in
Functional Distributional Semantics
- URL: http://arxiv.org/abs/2006.03002v1
- Date: Thu, 4 Jun 2020 16:48:45 GMT
- Title: Linguists Who Use Probabilistic Models Love Them: Quantification in
Functional Distributional Semantics
- Authors: Guy Emerson
- Abstract summary: I show how the previous formulation gives trivial truth values when a precise quantifier is used with vague predicates.
I propose an improved account, avoiding this problem by treating a vague predicate as a distribution over precise predicates.
I explain how the generic quantifier can be both pragmatically complex and yet computationally simpler than precise quantifiers.
- Score: 12.640283469603355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional Distributional Semantics provides a computationally tractable
framework for learning truth-conditional semantics from a corpus. Previous work
in this framework has provided a probabilistic version of first-order logic,
recasting quantification as Bayesian inference. In this paper, I show how the
previous formulation gives trivial truth values when a precise quantifier is
used with vague predicates. I propose an improved account, avoiding this
problem by treating a vague predicate as a distribution over precise
predicates. I connect this account to recent work in the Rational Speech Acts
framework on modelling generic quantification, and I extend this to modelling
donkey sentences. Finally, I explain how the generic quantifier can be both
pragmatically complex and yet computationally simpler than precise quantifiers.
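As a toy illustration of the paper's central move (my sketch, not the paper's model or notation), treat the vague predicate "tall" as a distribution over precise height thresholds; a precise quantifier evaluated under each sampled threshold is classically true or false, and averaging over thresholds yields a non-trivial probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four entities with known heights (metres); values are invented.
heights = np.array([1.72, 1.80, 1.85, 1.90])

# A vague predicate as a distribution over precise predicates: sample
# sharp thresholds for "tall" (Gaussian chosen purely for illustration).
thresholds = rng.normal(loc=1.78, scale=0.05, size=100_000)

# Under each sampled threshold the quantified statements are crisp, so
# their probabilities are the fraction of precise predicates that make
# them true: neither trivially 0 nor trivially 1.
p_every = (heights.min() >= thresholds).mean()
p_some = (heights.max() >= thresholds).mean()
print(f"P('every entity is tall') ~ {p_every:.3f}")
print(f"P('some entity is tall')  ~ {p_some:.3f}")
```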
Related papers
- Are LLMs Models of Distributional Semantics? A Case Study on Quantifiers [14.797001158310092]
We argue that distributional semantics models struggle with truth-conditional reasoning and symbolic processing.
Contrary to expectations, we find that LLMs align more closely with human judgements on exact quantifiers than on vague ones.
arXiv Detail & Related papers (2024-10-17T19:28:35Z)
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite the recognized importance of tokenization as a standard representation method in NLP, its theoretical underpinnings are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
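The snippet above does not spell the formalism out, so the following is only one plausible reading (my toy code, not the paper's framework): a tokenizer as a pair of maps between character strings and token sequences, with a round-trip consistency condition one can check.

```python
# Hypothetical vocabulary and greedy segmentation, for illustration only.
VOCAB = {"th": 0, "e": 1, " ": 2, "cat": 3, "c": 4, "a": 5, "t": 6, "h": 7}
ID2TOK = {i: t for t, i in VOCAB.items()}

def encode(s: str) -> list[int]:
    """Greedy longest-match segmentation over VOCAB."""
    ids, i = [], 0
    while i < len(s):
        for j in range(len(s), i, -1):   # try the longest piece first
            if s[i:j] in VOCAB:
                ids.append(VOCAB[s[i:j]])
                i = j
                break
        else:
            raise ValueError(f"unencodable character: {s[i]!r}")
    return ids

def decode(ids: list[int]) -> str:
    return "".join(ID2TOK[i] for i in ids)

s = "the cat"
assert decode(encode(s)) == s            # consistency holds on this input
print(encode(s), "->", decode(encode(s)))
```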
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models [22.757306452760112]
We introduce QuRe, a crowd-sourced dataset of human-annotated generalized quantifiers in Wikipedia sentences featuring percentage-equipped predicates.
We explore quantifier comprehension in language models using PRESQUE, a framework that combines natural language inference and the Rational Speech Acts framework.
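A hedged sketch of the Rational Speech Acts side of such a framework (not PRESQUE's actual implementation): the literal listener below uses hand-picked placeholder scores standing in for what an NLI model would supply.

```python
import numpy as np

quantifiers = ["few", "some", "most", "all"]
scopes = np.arange(0, 101, 10)           # candidate percentage scopes

def literal(q_idx: int) -> np.ndarray:
    """Placeholder compatibility of each scope with quantifier q_idx;
    in PRESQUE this role is played by NLI entailment scores."""
    lo, hi = [(0, 20), (10, 60), (50, 90), (90, 100)][q_idx]
    return np.where((scopes >= lo) & (scopes <= hi), 1.0, 1e-6)

L0 = np.stack([literal(i) for i in range(len(quantifiers))])
L0 = L0 / L0.sum(axis=1, keepdims=True)   # literal listener over scopes

alpha = 4.0                               # speaker rationality
S1 = L0 ** alpha
S1 = S1 / S1.sum(axis=0, keepdims=True)   # pragmatic speaker over quantifiers
L1 = S1 / S1.sum(axis=1, keepdims=True)   # pragmatic listener over scopes

for q, row in zip(quantifiers, L1):
    print(f"{q:>5}: most likely scope ~ {scopes[row.argmax()]}%")
```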
arXiv Detail & Related papers (2023-11-08T13:00:06Z)
- A Measure-Theoretic Characterization of Tight Language Models [105.16477132329416]
In some pathological cases, probability mass can "leak" onto the set of infinite sequences.
This paper offers a measure-theoretic treatment of language modeling.
We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense.
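A worked numeric example of such leakage (mine, not from the paper): if the probability of emitting EOS at step t is p_t = 1/(t+2)^2, then the sum of the p_t converges, the survival product stays positive, and exactly half of the probability mass escapes to infinite sequences.

```python
import numpy as np

# p_t: probability of emitting EOS at step t (after t non-EOS symbols).
T = 1_000_000
t = np.arange(T)
p_eos = 1.0 / (t + 2) ** 2

# Mass on finite strings = 1 - prod_t (1 - p_t); this product telescopes
# to 1/2, so half the mass "leaks" to infinite sequences: not tight.
finite_mass = 1.0 - np.exp(np.sum(np.log1p(-p_eos)))
print(f"mass on finite strings after {T} steps ~ {finite_mass:.6f}")  # ~0.5
# A constant p_eos (e.g. 0.1) instead drives the product to 0: tight.
```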
arXiv Detail & Related papers (2022-12-20T18:17:11Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the logic expressed in the explanation.
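A schematic rendering of that consistency check (my sketch, not the paper's code; `nli_model` is a trivial rule-based stand-in for any premise/hypothesis classifier):

```python
def nli_model(premise: str, hypothesis: str) -> str:
    # Placeholder "model" for illustration only, not a real API.
    if "not" in hypothesis and hypothesis.replace("not ", "") in premise:
        return "contradiction"
    return "entailment" if hypothesis in premise else "neutral"

def counterfactual_check(premise: str, hypothesis: str, explanation_pred: str) -> bool:
    """If the explanation says the label hinges on `explanation_pred`,
    flipping that predicate in the hypothesis should flip the prediction."""
    original = nli_model(premise, hypothesis)
    flipped_hyp = hypothesis.replace(explanation_pred, f"not {explanation_pred}")
    flipped = nli_model(premise, flipped_hyp)
    return original != flipped      # faithful iff the prediction moves

premise = "a dog is sleeping on the couch"
hypothesis = "a dog is sleeping"
print(counterfactual_check(premise, hypothesis, "sleeping"))  # True
```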
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Adaptive n-ary Activation Functions for Probabilistic Boolean Logic [2.294014185517203]
We show that we can learn arbitrary logic in a single layer using an activation function of matching or greater arity.
We represent belief tables using a basis that directly associates the number of nonzero parameters with the effective arity of the belief function.
This opens optimization approaches to reduce logical complexity by inducing parameter sparsity.
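A toy rendering of the arity idea (my construction; the paper's basis may differ): expand Boolean functions on {0,1}^2 in the multilinear basis {1, x1, x2, x1*x2} and read the effective arity off the nonzero coefficients. XOR, for instance, is x1 + x2 - 2*x1*x2, so it needs the arity-2 interaction term.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
# Multilinear basis evaluated on the four corners of the Boolean square.
B = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])

for name, truth in [("AND", [0, 0, 0, 1]),
                    ("OR",  [0, 1, 1, 1]),
                    ("XOR", [0, 1, 1, 0]),
                    ("x1",  [0, 0, 1, 1])]:
    coef = np.linalg.solve(B, np.array(truth, dtype=float))
    # Nonzero interaction coefficient means effective arity 2.
    arity = 2 if abs(coef[3]) > 1e-9 else (
        int(abs(coef[1]) > 1e-9) + int(abs(coef[2]) > 1e-9))
    print(f"{name}: coefficients {coef}, effective arity {arity}")
```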
arXiv Detail & Related papers (2022-03-16T22:47:53Z)
- Probing as Quantifying the Inductive Bias of Pre-trained Representations [99.93552997506438]
We present a novel framework for probing where the goal is to evaluate the inductive bias of representations for a particular task.
We apply our framework to a series of token-, arc-, and sentence-level tasks.
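The summary leaves the scoring mechanism implicit; one natural operationalization (a sketch under that assumption, on synthetic data) scores a representation by the Bayesian evidence of a simple conjugate linear probe, so a representation in which the task is linearly realizable earns higher evidence.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_evidence(X: np.ndarray, y: np.ndarray,
                 alpha: float = 1.0, sigma2: float = 0.1) -> float:
    """Log marginal likelihood of y under y = Xw + noise, w ~ N(0, I/alpha)."""
    n = len(y)
    C = sigma2 * np.eye(n) + (X @ X.T) / alpha   # marginal covariance of y
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

n, d = 40, 8
Z = rng.normal(size=(n, d))                              # stand-in representation
y = Z @ rng.normal(size=d) + 0.1 * rng.normal(size=n)    # labels linear in Z

Z_shuffled = rng.permutation(Z, axis=0)   # same features, signal destroyed
print("evidence (aligned)  :", round(log_evidence(Z, y), 1))
print("evidence (shuffled) :", round(log_evidence(Z_shuffled, y), 1))
```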
arXiv Detail & Related papers (2021-10-15T22:01:16Z)
- Rationales for Sequential Predictions [117.93025782838123]
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales, subsets of context that can explain individual model predictions.
We formalize finding the best rationale as a combinatorial optimization problem and propose an efficient greedy algorithm to approximate this objective.
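A generic greedy sketch of that idea (not the paper's exact algorithm; `score` is a stand-in callable for the model's conditional probability of its prediction given only the chosen context tokens):

```python
def greedy_rationale(context: list[str], target: str, score,
                     threshold: float = 0.9) -> list[int]:
    """Grow the token subset that best recovers the model's probability
    of `target`, stopping once it is within `threshold` of the full score."""
    chosen: list[int] = []
    full = score(context, target)
    while score([context[i] for i in sorted(chosen)], target) < threshold * full:
        remaining = [i for i in range(len(context)) if i not in chosen]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda i: score([context[j] for j in sorted(chosen + [i])],
                                       target))
        chosen.append(best)
    return sorted(chosen)

# Toy "model": probability is high once 'Paris' is in the chosen subset.
toy_score = lambda toks, tgt: 0.9 if "Paris" in toks else 0.05
print(greedy_rationale(["The", "capital", "Paris", "is", "large"],
                       "France", toy_score))  # -> [2]
```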
arXiv Detail & Related papers (2021-09-14T01:25:15Z)
- A Conditional Splitting Framework for Efficient Constituency Parsing [14.548146390081778]
We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions.
Our parsing model estimates the conditional probability distribution of possible splitting points in a given text span and supports efficient top-down decoding.
For discourse analysis, we show that, in our formulation, discourse segmentation can be framed as a special case of parsing.
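A minimal top-down decoder for a splitting formulation of this kind (my sketch; `split_scores` is a toy stand-in for the model's learned conditional distribution over split points of a span):

```python
def decode(i: int, j: int, split_scores) -> tuple:
    """Return a binary tree over tokens i..j-1 by greedily choosing the
    highest-scoring split point of each span, top down."""
    if j - i <= 1:
        return (i,)                      # leaf: a single token
    k = max(range(i + 1, j), key=lambda s: split_scores(i, j)[s])
    return (decode(i, k, split_scores), decode(k, j, split_scores))

def halving_scores(i, j):
    """Toy scorer that always prefers splitting a span in half."""
    mid = (i + j) // 2
    return {s: -abs(s - mid) for s in range(i + 1, j)}

print(decode(0, 4, halving_scores))  # -> (((0,), (1,)), ((2,), (3,)))
```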
arXiv Detail & Related papers (2021-06-30T00:36:34Z)
- Learning Probabilistic Sentence Representations from Paraphrases [47.528336088976744]
We define probabilistic models that produce distributions for sentences.
We train our models on paraphrases and demonstrate that they naturally capture sentence specificity.
Our model captures sentential entailment and provides ways to analyze the specificity and preciseness of individual words.
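One common way to realize such distributional sentence representations (a hedged sketch, not necessarily this paper's parameterization): a Gaussian per sentence, where variance tracks specificity and an asymmetric divergence such as KL yields an entailment-style score. The means, variances, and example sentences below are invented.

```python
import numpy as np

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between diagonal Gaussians."""
    return 0.5 * np.sum(var_p / var_q + (mu_q - mu_p) ** 2 / var_q
                        - 1.0 + np.log(var_q / var_p))

mu = np.zeros(4)
specific = (mu, np.full(4, 0.1))   # e.g. "a man plays a Gibson guitar"
general = (mu, np.full(4, 1.0))    # e.g. "a man plays an instrument"

# KL(specific || general) is small, KL(general || specific) is large:
# the asymmetry points from the specific sentence to the general one.
print(kl_gauss(*specific, *general))   # low
print(kl_gauss(*general, *specific))   # high
```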
arXiv Detail & Related papers (2020-05-16T21:10:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.