Learning Description Logic Ontologies. Five Approaches. Where Do They
Stand?
- URL: http://arxiv.org/abs/2104.01193v1
- Date: Fri, 2 Apr 2021 18:36:45 GMT
- Title: Learning Description Logic Ontologies. Five Approaches. Where Do They
Stand?
- Authors: Ana Ozaki
- Abstract summary: We highlight machine learning and data mining approaches that have been proposed for the creation of description logic (DL) ontologies. These are based on association rule mining, formal concept analysis, inductive logic programming, computational learning theory, and neural networks.
- Score: 14.650545418986058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quest for acquiring a formal representation of the knowledge of a domain
of interest has attracted researchers with various backgrounds into a diverse
field called ontology learning. We highlight classical machine learning and
data mining approaches that have been proposed for (semi-)automating the
creation of description logic (DL) ontologies. These are based on association
rule mining, formal concept analysis, inductive logic programming,
computational learning theory, and neural networks. We provide an overview of
each approach and how it has been adapted for dealing with DL ontologies.
Finally, we discuss the benefits and limitations of each of them for learning
DL ontologies.
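To give a concrete flavour of the first of the five approaches, the sketch below shows how association rule mining over instance data can suggest candidate concept inclusions C ⊑ D: if every individual asserted to belong to C also belongs to D (with enough supporting individuals), the inclusion is proposed as a candidate axiom. The ABox data, thresholds, and function names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (illustrative, not the paper's method) of mining
# candidate DL concept inclusions C ⊑ D from assertional data.

from itertools import permutations

# Toy ABox: each individual mapped to the atomic concepts it is asserted to satisfy.
abox = {
    "ind1": {"Professor", "Person", "Employee"},
    "ind2": {"Professor", "Person", "Employee"},
    "ind3": {"Student", "Person"},
    "ind4": {"Student", "Person"},
    "ind5": {"Person"},
}

def mine_inclusions(abox, min_support=2, min_confidence=1.0):
    """Propose axioms C ⊑ D whose support and confidence meet the thresholds."""
    concepts = set().union(*abox.values())
    axioms = []
    for c, d in permutations(concepts, 2):
        c_ext = [i for i, cs in abox.items() if c in cs]   # individuals in C
        both = [i for i in c_ext if d in abox[i]]          # ...that are also in D
        if len(c_ext) >= min_support and both and len(both) / len(c_ext) >= min_confidence:
            axioms.append((c, d))
    return sorted(axioms)

# On the toy data this proposes, e.g., Student ⊑ Person and Professor ⊑ Employee,
# but not Person ⊑ Student (confidence too low).
print(mine_inclusions(abox))
```

Note that mined axioms are only hypotheses: with confidence 1.0 the rule Employee ⊑ Professor is also proposed here, which is why such approaches typically involve an expert or further evidence before axioms enter the ontology.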
Related papers
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z) - A Short Review for Ontology Learning: Stride to Large Language Models Trend [1.7142222335232333]
Ontologies provide formal representation of knowledge shared within Web applications.
A new trend of approaches relies on large language models (LLMs) to enhance ontology learning.
arXiv Detail & Related papers (2024-04-23T12:47:31Z) - Position: Topological Deep Learning is the New Frontier for Relational Learning [51.05869778335334]
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models.
This paper posits that TDL is the new frontier for relational learning.
arXiv Detail & Related papers (2024-02-14T00:35:10Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Lattice-preserving $\mathcal{ALC}$ ontology embeddings with saturation [50.05281461410368]
An order-preserving embedding method is proposed to generate embeddings of OWL representations.
We show that our method outperforms state-of-the-art embedding methods in several knowledge base completion tasks.
arXiv Detail & Related papers (2023-05-11T22:27:51Z) - Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Similar to Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
Similar to KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box$^2$EL for the DL EL++, which represents both concepts and roles as boxes.
arXiv Detail & Related papers (2023-01-26T14:13:37Z) - Bayesian Learning for Neural Networks: an algorithmic survey [95.42181254494287]
This self-contained survey engages and introduces readers to the principles and algorithms of Bayesian Learning for Neural Networks.
It provides an introduction to the topic from an accessible, practical-algorithmic perspective.
arXiv Detail & Related papers (2022-11-21T21:36:58Z) - On the Complexity of Learning Description Logic Ontologies [14.650545418986058]
Ontologies are a popular way of representing domain knowledge, in particular, knowledge in domains related to life sciences.
We provide a formal specification of the exact and the probably approximately correct (PAC) learning models from learning theory.
arXiv Detail & Related papers (2021-03-25T09:18:12Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z) - Plausible Reasoning about EL-Ontologies using Concept Interpolation [27.314325986689752]
We propose an inductive mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning.
We focus on interpolation, a powerful commonsense reasoning mechanism which is closely related to cognitive models of category-based induction.
arXiv Detail & Related papers (2020-06-25T14:19:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.