On the Use of Large Language Models to Generate Capability Ontologies
- URL: http://arxiv.org/abs/2404.17524v4
- Date: Fri, 18 Oct 2024 08:03:02 GMT
- Title: On the Use of Large Language Models to Generate Capability Ontologies
- Authors: Luis Miguel Vieira da Silva, Aljosha Köcher, Felix Gehlhoff, Alexander Fay
- Abstract summary: Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural language text input.
This paper investigates how LLMs can be used to create capability ontologies.
- Score: 43.06143768014157
- Abstract: Capability ontologies are increasingly used to model functionalities of systems or machines. The creation of such ontological models with all properties and constraints of capabilities is very complex and can only be done by ontology experts. However, Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural language text input and thus support engineers / ontology experts. Therefore, this paper investigates how LLMs can be used to create capability ontologies. We present a study with a series of experiments in which capabilities with varying complexities are generated using different prompting techniques and with different LLMs. Errors in the generated ontologies are recorded and compared. To analyze the quality of the generated ontologies, a semi-automated approach based on RDF syntax checking, OWL reasoning, and SHACL constraints is used. The results of this study are very promising because even for complex capabilities, the generated ontologies are almost free of errors.
Related papers
- Examining the Robustness of Large Language Models across Language Complexity [19.184633713069353]
Large language models (LLMs) analyze textual artifacts generated by students to understand and evaluate their learning.
This study examines the robustness of several LLM-based student models that detect student self-regulated learning (SRL) in math problem-solving.
arXiv Detail & Related papers (2025-01-30T20:33:59Z) - LLMs4Life: Large Language Models for Ontology Learning in Life Sciences [10.658387847149195]
Existing Large Language Models (LLMs) struggle to generate ontologies with multiple hierarchical levels, rich interconnections, and comprehensive coverage.
We extend NeOn-GPT for ontology learning using LLMs with advanced prompt engineering techniques.
Our evaluation shows the viability of LLMs for ontology learning in specialized domains, providing solutions to longstanding limitations in model performance and scalability.
arXiv Detail & Related papers (2024-12-02T23:31:52Z) - Toward a Method to Generate Capability Ontologies from Natural Language Descriptions [43.06143768014157]
This contribution presents an innovative method to automate capability ontology modeling using Large Language Models (LLMs).
Our approach requires only a natural language description of a capability, which is then automatically inserted into a predefined prompt.
Our method greatly reduces manual effort, as only the initial natural language description and a final human review and possible correction are necessary.
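The method described above inserts a free-text capability description into a predefined prompt. A minimal illustration of that step, with a hypothetical template (the authors' actual prompt wording is not reproduced here):

```python
# Illustrative prompt-templating step: the only manual input is the
# natural-language capability description. The template text is an
# assumption, not the authors' actual prompt.
PROMPT_TEMPLATE = (
    "You are an ontology engineer. Convert the following capability "
    "description into a capability ontology in Turtle syntax.\n\n"
    "Capability description:\n{description}\n"
)

def build_prompt(description: str) -> str:
    # Insert the engineer-provided description into the fixed template.
    return PROMPT_TEMPLATE.format(description=description)
```

The LLM's reply would then be checked and, if needed, corrected in a final human review, as the summary notes.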
arXiv Detail & Related papers (2024-06-12T07:41:44Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - Large language models as oracles for instantiating ontologies with domain-specific knowledge [0.0]
We propose a domain-independent approach to automatically instantiate ontologies with domain-specific knowledge.
Our method queries the LLM multiple times and generates instances for classes and properties from its replies.
Experimentally, our method achieves performance up to five times higher than the state-of-the-art.
arXiv Detail & Related papers (2024-04-05T14:04:07Z) - Large Language Models as General Pattern Machines [64.75501424160748]
We show that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences.
Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary.
In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics.
arXiv Detail & Related papers (2023-07-10T17:32:13Z) - Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs that produce hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
arXiv Detail & Related papers (2023-05-23T04:28:16Z) - Causal Abstractions of Neural Networks [9.291492712301569]
We propose a new structural analysis method grounded in a formal theory of causal abstraction.
We apply this method to analyze neural models trained on the Multiply Quantified Natural Language Inference (MQNLI) corpus.
arXiv Detail & Related papers (2021-06-06T01:07:43Z) - DirectDebug: Automated Testing and Debugging of Feature Models [55.41644538483948]
Variability models (e.g., feature models) are a common way to represent the variabilities and commonalities of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., do not represent the expected variability properties of the underlying software artifact.
arXiv Detail & Related papers (2021-02-11T11:22:20Z) - Reverse Engineering Configurations of Neural Text Generation Models [86.9479386959155]
The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area.
We conduct an extensive suite of diagnostic tests to observe whether modeling choices leave detectable artifacts in the text they generate.
Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone.
arXiv Detail & Related papers (2020-04-13T21:02:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.