On the Complexity of Learning Description Logic Ontologies
- URL: http://arxiv.org/abs/2103.13694v1
- Date: Thu, 25 Mar 2021 09:18:12 GMT
- Title: On the Complexity of Learning Description Logic Ontologies
- Authors: Ana Ozaki
- Abstract summary: Ontologies are a popular way of representing domain knowledge, in particular, knowledge in domains related to life sciences.
We provide a formal specification of the exact and the probably approximately correct (PAC) learning models from computational learning theory.
- Score: 14.650545418986058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ontologies are a popular way of representing domain knowledge, in particular,
knowledge in domains related to life sciences. (Semi-)automating the process of
building an ontology has attracted researchers from different communities into
a field called "Ontology Learning". We provide a formal specification of the
exact and the probably approximately correct learning models from computational
learning theory. Then, we recall from the literature complexity results for
learning lightweight description logic (DL) ontologies in these models.
Finally, we highlight other approaches proposed in the literature for learning
DL ontologies.
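The PAC model referenced in the abstract can be illustrated with a classic toy case outside description logics: learning a hidden interval on [0, 1] from randomly drawn labeled examples. The sketch below is ours, not from the paper; it shows the tightest-fit strategy, whose hypothesis is, with high probability over the sample, approximately correct, with error shrinking as the sample grows.

```python
import random

def pac_learn_interval(draw, n):
    """Tightest-fit PAC learner for a hidden target interval.

    `draw()` returns (x, label), where label is True iff x lies in the
    hidden target interval. Returning the smallest interval covering all
    positive examples yields a hypothesis contained in the target whose
    error decreases as n grows.
    """
    positives = [x for x, label in (draw() for _ in range(n)) if label]
    if not positives:
        return 0.0, 0.0  # trivial hypothesis: no positive example observed
    return min(positives), max(positives)

def draw():
    # Uniform examples labeled by the hidden target interval [0.3, 0.7].
    x = random.uniform(0.0, 1.0)
    return x, 0.3 <= x <= 0.7

random.seed(0)
a, b = pac_learn_interval(draw, 1000)
# (a, b) lies inside [0.3, 0.7] and converges to it as n grows
```

In the exact learning model, by contrast, the learner may pose membership and equivalence queries to an oracle and must identify the target exactly; the complexity results recalled in the paper concern both settings for lightweight DLs.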
Related papers
- Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach to enhance and exploit LLM reactive capability to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z) - Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z) - A Short Review for Ontology Learning: Stride to Large Language Models Trend [1.7142222335232333]
Ontologies provide formal representation of knowledge shared within Web applications.
New trend of approaches is relying on large language models (LLMs) to enhance ontology learning.
arXiv Detail & Related papers (2024-04-23T12:47:31Z) - Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - Position: Topological Deep Learning is the New Frontier for Relational Learning [51.05869778335334]
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models.
This paper posits that TDL is the new frontier for relational learning.
arXiv Detail & Related papers (2024-02-14T00:35:10Z) - Reorganizing Educational Institutional Domain using Faceted Ontological Principles [0.0]
This work examines how different library classification systems and linguistic techniques organize a particular domain of interest.
We use knowledge representation techniques and languages to build a domain-specific ontology.
This construction not only helps in problem solving but also demonstrates the ease with which complex queries can be handled.
arXiv Detail & Related papers (2023-06-17T09:06:07Z) - MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning [50.40151403246205]
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks.
We define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules.
We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL) system.
arXiv Detail & Related papers (2022-05-01T11:01:28Z) - Learning Description Logic Ontologies. Five Approaches. Where Do They Stand? [14.650545418986058]
We highlight machine learning and data mining approaches that have been proposed for the creation of description logic (DL) ontologies.
These are based on association rule mining, formal concept analysis, inductive logic programming, computational learning theory, and neural networks.
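Of the five approaches listed, association rule mining is the easiest to sketch: from data about which individuals satisfy which atomic concepts, rules C → D with high confidence suggest candidate inclusion axioms C ⊑ D. The toy data, concept names, and threshold below are ours, purely for illustration:

```python
# Toy assertional data: each individual with the atomic concepts it satisfies.
data = {
    "alice": {"Person", "Researcher", "Employee"},
    "bob":   {"Person", "Researcher", "Employee"},
    "carol": {"Person", "Student"},
    "dave":  {"Person", "Student", "Employee"},
}

def candidate_inclusions(data, min_confidence=0.9):
    """Suggest candidate axioms C ⊑ D when confidence(C -> D) >= threshold."""
    concepts = set().union(*data.values())
    axioms = []
    for c in concepts:
        support = [inds for inds in data.values() if c in inds]
        if not support:
            continue
        for d in concepts - {c}:
            conf = sum(1 for inds in support if d in inds) / len(support)
            if conf >= min_confidence:
                axioms.append((c, d, conf))
    return axioms

suggested = candidate_inclusions(data)
# includes e.g. ('Researcher', 'Person', 1.0) and ('Student', 'Person', 1.0)
```

High-confidence rules are only candidates; in the approaches surveyed they are typically vetted by a reasoner or a domain expert before being added to the ontology.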
arXiv Detail & Related papers (2021-04-02T18:36:45Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - Knowledge Patterns [19.57676317580847]
This paper describes a new technique, called "knowledge patterns", for helping construct axiom-rich formal ontologies.
Knowledge patterns provide an important insight into the structure of a formal ontology.
We describe the technique and an application built using them, and then critique their strengths and weaknesses.
arXiv Detail & Related papers (2020-05-08T22:33:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.