Semantics, Ontology and Explanation
- URL: http://arxiv.org/abs/2304.11124v1
- Date: Fri, 21 Apr 2023 16:54:34 GMT
- Title: Semantics, Ontology and Explanation
- Authors: Giancarlo Guizzardi, Nicola Guarino
- Abstract summary: We discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The terms 'semantics' and 'ontology' are increasingly appearing together with
'explanation', not only in the scientific literature, but also in
organizational communication. However, all of these terms are also being
significantly overloaded. In this paper, we discuss their strong relation under
particular interpretations. Specifically, we discuss a notion of explanation
termed ontological unpacking, which aims at explaining symbolic domain
descriptions (conceptual models, knowledge graphs, logical specifications) by
revealing their ontological commitment in terms of their assumed truthmakers,
i.e., the entities in one's ontology that make the propositions in those
descriptions true. To illustrate this idea, we employ an ontological theory of
relations to explain (by revealing the hidden semantics of) a very simple
symbolic model encoded in the standard modeling language UML. We also discuss
the essential role played by ontology-driven conceptual models (resulting from
this form of explanation) in properly supporting semantic
interoperability tasks. Finally, we discuss the relation between ontological
unpacking and other forms of explanation in philosophy and science, as well as
in the area of Artificial Intelligence.
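As a rough, non-authoritative sketch of what ontological unpacking can look like in practice (rendered in Python rather than UML, with class names such as Person and Marriage and the attribute since chosen here purely for illustration, not taken from the paper), the fragment below contrasts a flat binary association with its unpacked counterpart, in which the relator grounding the relation becomes an explicit, property-bearing entity, i.e., the truthmaker of the "married to" proposition:

```python
from dataclasses import dataclass
from typing import Optional

# --- Flat symbolic model (what a plain UML association expresses) ---
# The model only records that a Person may be "married to" another Person;
# it is silent about what in the world makes that proposition true.
@dataclass
class Person:
    name: str
    married_to: Optional["Person"] = None

# --- Ontologically unpacked model ---
# The hidden truthmaker is made explicit as a relator: the Marriage itself,
# an entity that depends on both partners and can bear its own properties.
@dataclass
class Marriage:
    partner_a: Person
    partner_b: Person
    since: str  # the relator carries properties the flat association cannot

    def makes_true(self, x: Person, y: Person) -> bool:
        # The marriage is the truthmaker of the proposition "x is married to y".
        return {x.name, y.name} == {self.partner_a.name, self.partner_b.name}

if __name__ == "__main__":
    alice, bob = Person("Alice"), Person("Bob")

    # Flat model: the fact is just a link, with no account of what grounds it.
    alice.married_to, bob.married_to = bob, alice

    # Unpacked model: the same fact is grounded in an explicit relator.
    marriage = Marriage(alice, bob, since="2020-06-01")
    print(marriage.makes_true(alice, bob))  # True
```

In the unpacked version the truthmaker of "Alice is married to Bob" is itself an element of the model, so it can be queried, constrained, and mapped across systems; this is the sense in which ontology-driven conceptual models support semantic interoperability.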
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about it.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Ontology for Conceptual Modeling: Reality of What Thinging Machines Talk About, e.g., Information [0.0]
This paper develops an interdisciplinary research approach to build a diagrammatic foundation for conceptual modeling (CM).
It is an endeavor to escape an offshore procurement of ontology from philosophy and implant it in CM.
The results seem to indicate a promising approach to define information and understand its nature.
arXiv Detail & Related papers (2023-08-16T03:21:27Z)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z)
- A Theoretical Framework for AI Models Explainability with Application in Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how much the explanation looks convincing to the user).
arXiv Detail & Related papers (2022-12-29T20:05:26Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the presented study allows several main conclusions to be drawn.
arXiv Detail & Related papers (2022-05-03T22:31:42Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z)
- Knowledge Patterns [19.57676317580847]
This paper describes a new technique, called "knowledge patterns", for helping construct axiom-rich, formal ontologies.
Knowledge patterns provide an important insight into the structure of a formal ontology.
We describe the technique and an application built using them, and then critique their strengths and weaknesses.
arXiv Detail & Related papers (2020-05-08T22:33:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.