Plausible Reasoning about EL-Ontologies using Concept Interpolation
- URL: http://arxiv.org/abs/2006.14437v1
- Date: Thu, 25 Jun 2020 14:19:41 GMT
- Title: Plausible Reasoning about EL-Ontologies using Concept Interpolation
- Authors: Yazmín Ibáñez-García, Víctor Gutiérrez-Basulto, Steven Schockaert
- Abstract summary: We propose an inductive mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning.
We focus on interpolation, a powerful commonsense reasoning mechanism which is closely related to cognitive models of category-based induction.
- Score: 27.314325986689752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Description logics (DLs) are standard knowledge representation languages for
modelling ontologies, i.e. knowledge about concepts and the relations between
them. Unfortunately, DL ontologies are difficult to learn from data and
time-consuming to encode manually. As a result, ontologies for broad domains
are almost inevitably incomplete. In recent years, several data-driven
approaches have been proposed for automatically extending such ontologies. One
family of methods rely on characterizations of concepts that are derived from
text descriptions. While such characterizations do not capture ontological
knowledge directly, they encode information about the similarity between
different concepts, which can be exploited for filling in the gaps in existing
ontologies. To this end, several inductive inference mechanisms have already
been proposed, but these have been defined and used in a heuristic fashion. In
this paper, we instead propose an inductive inference mechanism which is based
on a clear model-theoretic semantics, and can thus be tightly integrated with
standard deductive reasoning. We particularly focus on interpolation, a
powerful commonsense reasoning mechanism which is closely related to cognitive
models of category-based induction. Apart from the formalization of the
underlying semantics, as our main technical contribution we provide
computational complexity bounds for reasoning in EL with this interpolation
mechanism.
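The interpolation pattern described above can be illustrated with a toy sketch (the vectors, the betweenness test, and the `interpolate` helper are all hypothetical illustrations, not the paper's actual construction): if two concepts that share a known superclass flank a third concept in a similarity space derived from text descriptions, we plausibly conjecture that the third concept has the same superclass.

```python
import numpy as np

def is_between(c, a, b, tol=0.1):
    """Check whether vector c lies (approximately) on the segment
    between a and b in the similarity space."""
    ab = b - a
    t = np.dot(c - a, ab) / np.dot(ab, ab)  # projection parameter
    if not 0.0 <= t <= 1.0:
        return False
    proj = a + t * ab  # closest point on the segment
    return np.linalg.norm(c - proj) <= tol

def interpolate(subsumptions, embeddings, concept):
    """Plausibly infer superclasses of `concept`: if two concepts that
    are both subsumed by the same D flank `concept` in the embedding
    space, conjecture that `concept` is subsumed by D as well."""
    inferred = set()
    for (a, d1) in subsumptions:
        for (b, d2) in subsumptions:
            if d1 == d2 and a != b and is_between(
                    embeddings[concept], embeddings[a], embeddings[b]):
                inferred.add(d1)
    return inferred

# Toy vectors: "mule" lies between "horse" and "donkey".
emb = {"horse": np.array([0.0, 1.0]),
       "donkey": np.array([1.0, 1.0]),
       "mule": np.array([0.5, 1.02]),
       "herbivore": np.array([0.5, 2.0])}
kb = {("horse", "herbivore"), ("donkey", "herbivore")}
print(interpolate(kb, emb, "mule"))  # → {'herbivore'}
```

The sketch only captures the inductive step; the paper's contribution is giving such inferences a model-theoretic semantics so they compose soundly with deductive EL reasoning.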
Related papers
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI make it possible to mitigate the opacity of Transformer similarity models by providing improved explanations.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Like Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
A promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box$^2$EL for the DL EL++, which represents both concepts and roles as boxes.
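The core idea of box embeddings can be sketched in a few lines (a minimal illustration, not the paper's actual model; the `Box` class and the toy regions are assumptions): each concept is a region in vector space, and subsumption is modelled as geometric containment.

```python
import numpy as np

class Box:
    """Axis-aligned box in R^n serving as the region for a concept."""
    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)

    def contains(self, other):
        """Box containment models subsumption: if the box of C lies
        inside the box of D, the embedding entails C ⊑ D."""
        return bool(np.all(other.low >= self.low)
                    and np.all(other.high <= self.high))

animal = Box([0, 0], [10, 10])
cat = Box([2, 2], [4, 4])
print(animal.contains(cat))   # True: Cat ⊑ Animal holds in the embedding
print(cat.contains(animal))   # False
```

In a trained model the box corners are learned parameters; losses push boxes to respect the axioms of the ontology, while roles get their own box-based representation.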
arXiv Detail & Related papers (2023-01-26T14:13:37Z)
- Non-Axiomatic Term Logic: A Computational Theory of Cognitive Symbolic Reasoning [3.344997561878685]
Non-Axiomatic Term Logic (NATL) is a theoretical computational framework of humanlike symbolic reasoning in artificial intelligence.
NATL unites a discrete syntactic system inspired from Aristotle's term logic and a continuous semantic system based on the modern idea of distributed representations.
arXiv Detail & Related papers (2022-10-12T15:31:35Z)
- Entropy-based Logic Explanations of Neural Networks [24.43410365335306]
We propose an end-to-end differentiable approach for extracting logic explanations from neural networks.
The method relies on an entropy-based criterion which automatically identifies the most relevant concepts.
We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
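An entropy-based criterion of this kind can be sketched as follows (a hypothetical illustration of the general idea, not the paper's implementation; the scores and the 0.1 threshold are assumptions): relevance scores over concepts are softmax-normalised, and a low Shannon entropy indicates that a few concepts dominate and can be kept for a concise explanation.

```python
import numpy as np

def relevance_entropy(scores):
    """Shannon entropy of softmax-normalised concept relevance scores.
    Low entropy means a handful of concepts dominate the prediction."""
    p = np.exp(scores - np.max(scores))  # numerically stable softmax
    p /= p.sum()
    entropy = float(-(p * np.log(p + 1e-12)).sum())
    return entropy, p

scores = np.array([4.0, 0.1, 0.2, 0.1])  # one concept clearly dominant
h, p = relevance_entropy(scores)
relevant = [i for i, pi in enumerate(p) if pi > 0.1]  # concepts to keep
print(h, relevant)
```

A peaked score vector yields a much lower entropy than a uniform one, which is what lets the criterion single out the most relevant concepts automatically.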
arXiv Detail & Related papers (2021-06-12T15:50:47Z)
- A Description Logic for Analogical Reasoning [28.259681405091666]
We present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy.
This is the first paper that studies analogical reasoning within the setting of description logics.
arXiv Detail & Related papers (2021-05-10T19:06:07Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Expressiveness and machine processability of Knowledge Organization Systems (KOS): An analysis of concepts and relations [0.0]
Both the expressiveness and the machine processability of a Knowledge Organization System are largely determined by its structural rules.
Ontologies explicitly define diverse types of relations, and are by their nature machine-processable.
arXiv Detail & Related papers (2020-03-11T12:35:52Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.