A Kind Introduction to Lexical and Grammatical Aspect, with a Survey of Computational Approaches
- URL: http://arxiv.org/abs/2208.09012v1
- Date: Thu, 18 Aug 2022 18:22:42 GMT
- Title: A Kind Introduction to Lexical and Grammatical Aspect, with a Survey of Computational Approaches
- Authors: Annemarie Friedrich, Nianwen Xue, Alexis Palmer
- Abstract summary: Aspectual meaning refers to how the internal temporal structure of situations is presented.
This survey gives an overview of computational approaches to modeling lexical and grammatical aspect.
- Score: 7.310850880167243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspectual meaning refers to how the internal temporal structure of situations
is presented. This includes whether a situation is described as a state or as
an event, whether the situation is finished or ongoing, and whether it is
viewed as a whole or with a focus on a particular phase. This survey gives an
overview of computational approaches to modeling lexical and grammatical aspect
along with intuitive explanations of the necessary linguistic concepts and
terminology. In particular, we describe the concepts of stativity, telicity,
habituality, perfective and imperfective, as well as influential inventories of
eventuality and situation types. We argue that because aspect is a crucial
component of semantics, especially when it comes to reporting the temporal
structure of situations in a precise way, future NLP approaches need to be able
to handle and evaluate it systematically in order to achieve human-level
language understanding.
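To make these distinctions concrete, here is a toy rule-based sketch in Python. It is illustrative only, not a method from the survey; the verb list and the adverbial cues are deliberately simplistic assumptions.

```python
# Toy illustration of aspectual features; systems surveyed in the paper
# use trained classifiers over much richer linguistic features.

STATIVE_VERBS = {"know", "love", "own", "believe", "resemble"}  # tiny sample

def aspect_features(verb: str, adverbials: str = "") -> dict:
    """Guess coarse aspectual properties of a simple English clause."""
    lemma = verb[:-1] if verb.endswith("s") else verb  # crude lemmatization
    return {
        # Lexical aspect: stative verbs describe states, others events.
        "stative": lemma in STATIVE_VERBS,
        # Grammatical aspect: the English progressive ("is running")
        # typically presents a situation as ongoing (imperfective).
        "progressive": verb.endswith("ing"),
        # Telicity heuristic: "in <duration>" favors a telic reading
        # ("ran a mile in four minutes"), "for <duration>" an atelic one
        # ("ran for an hour"); otherwise leave it underspecified.
        "telic": (True if " in " in f" {adverbials} "
                  else False if " for " in f" {adverbials} "
                  else None),
    }

print(aspect_features("knows"))
# {'stative': True, 'progressive': False, 'telic': None}
print(aspect_features("running", "a mile in four minutes"))
# {'stative': False, 'progressive': True, 'telic': True}
```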
Related papers
- LLMs as Function Approximators: Terminology, Taxonomy, and Questions for Evaluation [18.2932386988379]
This paper argues that the loss of clarity on what these models model leads to metaphors like "artificial general intelligences".
The proposal is to see their generality, and their potential value, in their ability to approximate specialist functions, based on a natural language specification.
arXiv Detail & Related papers (2024-07-18T17:49:56Z)
- An Overview Of Temporal Commonsense Reasoning and Acquisition [20.108317515225504]
Temporal commonsense reasoning refers to the ability to understand the typical temporal context of phrases, actions, and events.
Recent research on the performance of large language models suggests that they often take shortcuts in their reasoning and fall prey to simple linguistic traps.
arXiv Detail & Related papers (2023-07-28T01:30:15Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
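As a hedged sketch of the decomposition idea just described (the prompt wording and the `generate` callable are hypothetical stand-ins, not the paper's actual setup):

```python
from typing import Callable

def decompose(utterance: str, generate: Callable[[str], str]) -> list[str]:
    """Ask a language model for propositions a reader could infer from
    the utterance; `generate` is any prompt-to-text function."""
    prompt = ("List short propositions that a reader could reasonably "
              "infer from the following statement, one per line:\n"
              f"{utterance}\n")
    # One inferred proposition per non-empty output line.
    return [line.lstrip("- ").strip()
            for line in generate(prompt).splitlines()
            if line.strip()]

# e.g. decompose("We should lower taxes on small businesses.", my_llm)
# might yield propositions like "taxes on small businesses are too high".
```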
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- A Linguistic Investigation of Machine Learning based Contradiction Detection Models: An Empirical Analysis and Future Perspectives [0.34998703934432673]
We analyze two Natural Language Inference data sets with respect to their linguistic features.
The goal is to identify those syntactic and semantic properties that are particularly hard for a machine learning model to comprehend.
arXiv Detail & Related papers (2022-10-19T10:06:03Z)
- Thirty years of Epistemic Specifications [8.339560855135575]
We extend disjunctive logic programs under the stable model semantics with modal constructs called subjective literals.
Using subjective literals, it is possible to check whether a regular literal is true in every or some stable models of the program.
Several attempts have been made to capture the intuitions underlying the language by means of a formal semantics.
arXiv Detail & Related papers (2021-08-17T15:03:10Z)
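A minimal sketch of that check, assuming the stable models have already been computed (for instance by an ASP solver) and are given as sets of atoms:

```python
def holds_K(literal: str, stable_models: list[set[str]]) -> bool:
    """K l: the literal is true in every stable model."""
    return all(literal in m for m in stable_models)

def holds_M(literal: str, stable_models: list[set[str]]) -> bool:
    """M l: the literal is true in at least one stable model."""
    return any(literal in m for m in stable_models)

# Stable models of the hypothetical disjunctive program {a | b.  c.}
models = [{"a", "c"}, {"b", "c"}]
print(holds_K("c", models))  # True: c holds in all stable models
print(holds_M("a", models))  # True: a holds in some stable model
print(holds_K("a", models))  # False: a fails in the second model
```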
- Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge [59.22170796793179]
Transformer Language Models (TLMs) were tested on a benchmark for the dynamic estimation of thematic fit.
Our results show that TLMs can reach performances that are comparable to those achieved by the Structured Distributional Model (SDM).
However, additional analysis consistently suggests that TLMs do not capture important aspects of event knowledge.
arXiv Detail & Related papers (2021-07-22T20:52:26Z)
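For context, thematic fit asks how plausible a candidate argument is for a verb's semantic role. A common static baseline (not this paper's TLM or SDM setup; all vectors and fillers below are placeholders) scores a candidate against a prototype built from typical role fillers:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def thematic_fit(candidate: np.ndarray,
                 typical_fillers: list[np.ndarray]) -> float:
    """Score how well a candidate argument fits a verb's role by its
    similarity to the centroid of typical fillers for that role."""
    prototype = np.mean(typical_fillers, axis=0)
    return cosine(candidate, prototype)

# e.g. with word vectors for the patient role of "drink":
# thematic_fit(vec["coffee"], [vec["water"], vec["tea"], vec["beer"]])
# should exceed thematic_fit(vec["cat"], ...) for the same role.
```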
- Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see how cognitive systems that process language need to satisfy certain functional constraints.
Using the advances of word embedding techniques, a model of relational learning is simulated.
arXiv Detail & Related papers (2020-11-22T17:28:50Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to address this need.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.