Thirty years of Epistemic Specifications
- URL: http://arxiv.org/abs/2108.07669v1
- Date: Tue, 17 Aug 2021 15:03:10 GMT
- Title: Thirty years of Epistemic Specifications
- Authors: Jorge Fandinno, Wolfgang Faber and Michael Gelfond
- Abstract summary: We extend disjunctive logic programs under the stable model semantics with modal constructs called subjective literals.
Using subjective literals, it is possible to check whether a regular literal is true in every stable model of the program or in some of them.
Several attempts to capture the intuitions underlying the language by means of a formal semantics have been made.
- Score: 8.339560855135575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The language of epistemic specifications and epistemic logic programs extends
disjunctive logic programs under the stable model semantics with modal
constructs called subjective literals. Using subjective literals, it is
possible to check whether a regular literal is true in every stable model of
the program or in some of them; these models, in this context also called
belief sets, are collected in a set called a world view. This allows for
representing, within the language, whether some proposition should be
understood according to the open or the closed world assumption. Several
attempts to capture the intuitions underlying the language by means of a
formal semantics have been made, resulting in a multitude of proposals that
makes it difficult to understand the current state of the art. In this paper,
we provide an overview of the inception of the field and the knowledge
representation and reasoning tasks it is suitable for. We also provide a
detailed analysis of the properties of proposed semantics and an outlook on
challenges to be tackled by future research in the area. Under consideration
in Theory and Practice of Logic Programming (TPLP).
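A rough illustration of the modal constructs described above (a minimal sketch, not taken from the paper): in a world view, a subjective literal "K l" is read as "l holds in every belief set" and "M l" as "l holds in some belief set". The Python snippet below evaluates these checks over a world view modeled as a list of belief sets; the literals and the example program mentioned in the comments are hypothetical.

    # Sketch: evaluating subjective literals over a world view.
    # A world view is modeled as a list of belief sets (stable models),
    # each belief set being a set of ground literals written as strings.

    def holds_k(literal: str, world_view: list[set[str]]) -> bool:
        """K literal: true iff the literal belongs to every belief set."""
        return all(literal in belief_set for belief_set in world_view)

    def holds_m(literal: str, world_view: list[set[str]]) -> bool:
        """M literal: true iff the literal belongs to at least one belief set."""
        return any(literal in belief_set for belief_set in world_view)

    # Hypothetical world view with two belief sets, e.g. as produced by the
    # disjunctive fact "eligible | -eligible."
    world_view = [{"eligible"}, {"-eligible"}]

    print(holds_k("eligible", world_view))  # False: not in every belief set
    print(holds_m("eligible", world_view))  # True: in at least one belief set

In presentations of the language in the literature, a rule along the lines of "interview :- not K eligible, not K -eligible" then fires exactly when eligibility is left undecided across the world view, which is how the language lets a proposition be treated under the open rather than the closed world assumption.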
Related papers
- Toward Conceptual Modeling for Propositional Logic: Propositions as Events [0.0]
This paper reflects on applying the language of propositional logic to a high-level diagrammatic representation called the thinging machines (TM) model.
The ultimate research objective is a thorough semantic alignment of TM modeling and propositional logic within a single structure.
arXiv Detail & Related papers (2024-09-24T03:45:24Z)
- Learning Visual-Semantic Subspace Representations for Propositional Reasoning [49.17165360280794]
We propose a novel approach for learning visual representations that conform to a specified semantic structure.
Our approach is based on a new nuclear norm-based loss.
We show that its minimum encodes the spectral geometry of the semantics in a subspace lattice.
arXiv Detail & Related papers (2024-05-25T12:51:38Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Evaluating statistical language models as pragmatic reasoners [39.72348730045737]
We evaluate the capacity of large language models to infer meanings of pragmatic utterances.
We find that LLMs can derive context-grounded, human-like distributions over the interpretations of several complex pragmatic utterances.
These results shed light on the inferential capacity of statistical language models and their use in pragmatic and semantic parsing applications.
arXiv Detail & Related papers (2023-05-01T18:22:10Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- A Kind Introduction to Lexical and Grammatical Aspect, with a Survey of Computational Approaches [7.310850880167243]
Aspectual meaning refers to how the internal temporal structure of situations is presented.
This survey gives an overview of computational approaches to modeling lexical and grammatical aspect.
arXiv Detail & Related papers (2022-08-18T18:22:42Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see how cognitive systems that process language need to have certain functional constraints.
Using the advances of word embedding techniques, a model of relational learning is simulated.
arXiv Detail & Related papers (2020-11-22T17:28:50Z)
- Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context [0.6091702876917279]
The architecture integrates existing NLP components to produce candidate analyses on eight levels of linguistic modeling.
The architecture builds on Universal Dependencies (UD) as its representation formalism on the form level and on Abstract Meaning Representations (AMRs) to represent semantic analyses of learner answers.
arXiv Detail & Related papers (2020-04-15T11:00:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.