Pragmatic constraints and pronoun reference disambiguation: the possible and the impossible
- URL: http://arxiv.org/abs/2204.01166v2
- Date: Tue, 5 Apr 2022 03:37:27 GMT
- Title: Pragmatic constraints and pronoun reference disambiguation: the possible and the impossible
- Authors: Ernest Davis
- Abstract summary: In AI and linguistics research, pronoun disambiguation has mostly been studied in cases where the referent is explicitly mentioned nearby in the preceding text.
Pronouns in natural text often refer to entities, collections, or events that are only implicitly mentioned previously.
It is occasionally possible to have a pronoun that is far separated from its referent in a text.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Pronoun disambiguation in understanding text and discourse often requires the
application of both general pragmatic knowledge and context-specific
information. In AI and linguistics research, this has mostly been studied in
cases where the referent is explicitly stated in the preceding text nearby.
However, pronouns in natural text often refer to entities, collections, or
events that are only implicitly mentioned previously; in those cases the need
to use pragmatic knowledge to disambiguate becomes much more acute and the
characterization of the knowledge becomes much more difficult. Extended
literary texts at times employ both extremely complex patterns of reference and
extremely rich and subtle forms of knowledge. Indeed, it is occasionally
possible to have a pronoun that is far separated from its referent in a text.
In the opposite direction, pronoun use is affected by considerations of focus
of attention and by formal constraints such as a preference for parallel
syntactic structures; these can be so strong that no pragmatic knowledge
suffices to overrule them.
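The gap between surface heuristics and pragmatic knowledge can be made concrete with a small illustration. The sketch below is not from the paper; it is a minimal toy resolver written for this summary that picks the most recent preceding candidate noun phrase agreeing in number with the pronoun, a common baseline heuristic. On a Winograd-style sentence pair it returns the same antecedent for both variants, even though world knowledge dictates different referents, which is exactly the kind of case where pragmatic knowledge is needed.

```python
# Toy illustration (not the paper's method): a recency-based pronoun resolver.
# Candidate antecedents and their grammatical number are listed by hand to keep
# the example self-contained; a real system would obtain them from a parser.

def resolve_pronoun(candidates, pronoun_number):
    """Return the most recent candidate whose number matches the pronoun.

    candidates: list of (mention, number) pairs in textual order.
    pronoun_number: "sg" or "pl".
    """
    for mention, number in reversed(candidates):
        if number == pronoun_number:
            return mention
    return None

# Winograd-style pair: "The city council denied the demonstrators a permit
# because they {feared, advocated} violence." The candidates preceding "they"
# are identical in both variants, so a recency heuristic cannot tell them apart.
candidates = [("the city council", "pl"), ("the demonstrators", "pl")]

for verb, intended in [("feared", "the city council"),
                       ("advocated", "the demonstrators")]:
    guess = resolve_pronoun(candidates, pronoun_number="pl")
    print(f"because they {verb} violence -> heuristic: {guess!r}, "
          f"pragmatically correct: {intended!r}")
```

The heuristic resolves "they" to "the demonstrators" in both variants because that mention is closest, whereas commonsense knowledge about who fears and who advocates violence assigns different referents in each case.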
Related papers
- Situated Ground Truths: Enhancing Bias-Aware AI by Situating Data Labels with SituAnnotate [0.1843404256219181]
SituAnnotate is a novel ontology-based approach to structured and context-aware data annotation.
It aims to anchor the ground truth data employed in training AI systems within contextual and culturally bound situations.
As a method to create, query, and compare label-based datasets, SituAnnotate empowers downstream AI systems to undergo training with explicit consideration of context and cultural bias.
arXiv Detail & Related papers (2024-06-10T09:33:13Z) - Quantifying the redundancy between prosody and text [67.07817268372743]
We use large language models to estimate how much information is redundant between prosody and the words themselves.
We find a high degree of redundancy between the information carried by the words and prosodic information across several prosodic features.
Still, we observe that prosodic features cannot be fully predicted from text, suggesting that prosody carries information above and beyond the words.
arXiv Detail & Related papers (2023-11-28T21:15:24Z) - Unsupervised Mapping of Arguments of Deverbal Nouns to Their
Corresponding Verbal Labels [52.940886615390106]
Deverbal nouns are nominal forms of verbs commonly used in written English texts to describe events or actions, as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
arXiv Detail & Related papers (2023-06-24T10:07:01Z) - An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z) - A Linguistic Investigation of Machine Learning based Contradiction
Detection Models: An Empirical Analysis and Future Perspectives [0.34998703934432673]
We analyze two Natural Language Inference data sets with respect to their linguistic features.
The goal is to identify the syntactic and semantic properties that are particularly hard for a machine learning model to comprehend.
arXiv Detail & Related papers (2022-10-19T10:06:03Z) - Target Languages (vs. Inductive Biases) for Learning to Act and Plan [13.820550902006078]
I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics.
The goals of the paper and talk are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan.
arXiv Detail & Related papers (2021-09-15T10:24:13Z) - Exophoric Pronoun Resolution in Dialogues with Topic Regularization [84.23706744602217]
Resolving pronouns to their referents has long been studied as a fundamental natural language understanding problem.
Previous works on pronoun coreference resolution (PCR) mostly focus on resolving pronouns to mentions in text while ignoring the exophoric scenario.
We propose to jointly leverage the local context and global topics of dialogues to solve the out-of-text PCR problem.
arXiv Detail & Related papers (2021-09-10T11:08:31Z) - Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z) - XTE: Explainable Text Entailment [8.036150169408241]
Entailment is the task of determining whether a piece of text logically follows from another piece of text.
XTE - Explainable Text Entailment - is a novel composite approach for recognizing text entailment.
arXiv Detail & Related papers (2020-09-25T20:49:07Z) - Improving Machine Reading Comprehension with Contextualized Commonsense
Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)