Assessing the Capability of Large Language Models for Domain-Specific Ontology Generation
- URL: http://arxiv.org/abs/2504.17402v1
- Date: Thu, 24 Apr 2025 09:47:14 GMT
- Title: Assessing the Capability of Large Language Models for Domain-Specific Ontology Generation
- Authors: Anna Sofia Lippolis, Mohammad Javad Saeedizade, Robin Keskisarkka, Aldo Gangemi, Eva Blomqvist, Andrea Giovanni Nuzzolese
- Abstract summary: Large Language Models (LLMs) have shown significant potential for ontology engineering. We investigate the generalizability of two state-of-the-art LLMs, DeepSeek and o1-preview, by generating ontologies from a set of competency questions. Our findings show that the performance of the experiments is remarkably consistent across all domains.
- Score: 1.099532646524593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have shown significant potential for ontology engineering. However, it is still unclear to what extent they are applicable to the task of domain-specific ontology generation. In this study, we explore the application of LLMs for automated ontology generation and evaluate their performance across different domains. Specifically, we investigate the generalizability of two state-of-the-art LLMs, DeepSeek and o1-preview, both equipped with reasoning capabilities, by generating ontologies from a set of competency questions (CQs) and related user stories. Our experimental setup comprises six distinct domains drawn from existing ontology engineering projects and a total of 95 curated CQs designed to test the models' reasoning for ontology engineering. Our findings show that with both LLMs, the performance of the experiments is remarkably consistent across all domains, indicating that these methods are capable of generalizing ontology generation tasks irrespective of the domain. These results highlight the potential of LLM-based approaches in achieving scalable and domain-agnostic ontology construction and lay the groundwork for further research into enhancing automated reasoning and knowledge representation techniques.
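The abstract describes the setup only at a high level, so as a rough, non-authoritative illustration of what such a pipeline can look like, the sketch below assumes a hypothetical call_llm helper standing in for DeepSeek or o1-preview, an illustrative prompt not taken from the paper, and uses rdflib purely to check that the returned Turtle parses. It is not the authors' code.

```python
from rdflib import Graph  # used only to syntax-check the model's output


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g. to DeepSeek or o1-preview).
    Swap in your provider's SDK; the sketch only needs a Turtle string back."""
    raise NotImplementedError("plug in an LLM client here")


def generate_ontology(user_story: str, competency_questions: list[str]) -> Graph:
    """Ask the model for an OWL ontology in Turtle that covers the given CQs."""
    prompt = (
        "You are an ontology engineer. Write an OWL ontology in Turtle syntax "
        "that models the user story below and can answer every competency question.\n\n"
        f"User story:\n{user_story}\n\n"
        "Competency questions:\n"
        + "\n".join(f"- {cq}" for cq in competency_questions)
        + "\n\nReturn only valid Turtle."
    )
    turtle = call_llm(prompt)
    graph = Graph()
    graph.parse(data=turtle, format="turtle")  # fails loudly on malformed output
    return graph
```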
Related papers
- A Call for New Recipes to Enhance Spatial Reasoning in MLLMs [85.67171333213301]
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in general vision-language tasks.
Recent studies have exposed critical limitations in their spatial reasoning capabilities.
This deficiency in spatial reasoning significantly constrains MLLMs' ability to interact effectively with the physical world.
arXiv Detail & Related papers (2025-04-21T11:48:39Z)
- A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z)
- Ontology Generation using Large Language Models [1.0037949839020768]
We present and evaluate two new prompting techniques for automated ontology development: Memoryless CQbyCQ and Ontogenia. Trials show that OpenAI o1-preview with Ontogenia produces ontologies of sufficient quality to meet the requirements of ontology engineers.
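The summary names the prompting techniques without giving their prompts, so the following is only one plausible reading of "Memoryless CQbyCQ": each competency question is handled in a fresh, history-free prompt and the per-CQ fragments are merged afterwards. The call_llm argument is a hypothetical stand-in for the model API, not the paper's implementation.

```python
def memoryless_cq_by_cq(story: str, cqs: list[str], call_llm) -> str:
    """One plausible reading of 'Memoryless CQbyCQ': every competency question
    is sent in a fresh prompt with no chat history, and the per-CQ Turtle
    fragments are concatenated for a later merge/cleanup pass."""
    fragments = []
    for cq in cqs:
        prompt = (
            f"User story:\n{story}\n\n"
            "Model ONLY what is needed to answer this competency question, "
            f"in OWL/Turtle:\n{cq}"
        )
        fragments.append(call_llm(prompt))  # no state carried between calls
    return "\n\n".join(fragments)
```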
arXiv Detail & Related papers (2025-03-07T13:03:28Z)
- LLMs4Life: Large Language Models for Ontology Learning in Life Sciences [10.658387847149195]
Existing Large Language Models (LLMs) struggle to generate ontologies with multiple hierarchical levels, rich interconnections, and comprehensive coverage. We extend the NeOn-GPT pipeline for ontology learning using LLMs with advanced prompt engineering techniques. Our evaluation shows the viability of LLMs for ontology learning in specialized domains, providing solutions to longstanding limitations in model performance and scalability.
arXiv Detail & Related papers (2024-12-02T23:31:52Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- A RAG Approach for Generating Competency Questions in Ontology Engineering [1.0044270899550196]
With the emergence of Large Language Models (LLMs), it has become possible to automate and enhance this process. We present a retrieval-augmented generation (RAG) approach that uses LLMs for the automatic generation of CQs. We conduct experiments using GPT-4 on two domain ontology engineering tasks and compare results against ground-truth CQs constructed by domain experts.
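The summary names a RAG pipeline without detail; the sketch below only shows the general retrieve-then-generate shape under stated assumptions: a toy word-overlap retriever (a real system would use dense embeddings and a vector index) and the same hypothetical call_llm helper. It is not the paper's implementation.

```python
def retrieve(corpus: list[str], query: str, k: int = 3) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:k]


def generate_cqs(domain_description: str, corpus: list[str], call_llm) -> str:
    """Retrieve supporting passages, then ask the model to draft competency questions."""
    context = "\n---\n".join(retrieve(corpus, domain_description))
    prompt = (
        f"Domain: {domain_description}\n\nRelevant material:\n{context}\n\n"
        "Draft ten competency questions that an ontology for this domain should answer."
    )
    return call_llm(prompt)
```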
arXiv Detail & Related papers (2024-09-13T13:34:32Z)
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
However, the logical reasoning that ontologies can directly support is quite limited in learning, approximation and prediction.
One straightforward solution is to integrate statistical analysis and machine learning.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
- On the Use of Large Language Models to Generate Capability Ontologies [43.06143768014157]
Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural language text input.
This paper investigates how LLMs can be used to create capability ontologies.
arXiv Detail & Related papers (2024-04-26T16:41:00Z)
- Large language models as oracles for instantiating ontologies with domain-specific knowledge [0.0]
We propose a domain-independent approach to automatically instantiate ontologies with domain-specific knowledge.
Our method queries the LLM multiple times and generates instances for classes and properties from its replies.
Experimentally, our method achieves accuracy that is up to five times higher than the state of the art.
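The summary only gestures at the query-the-model-repeatedly idea; below is a minimal sketch of that loop, assuming the hypothetical call_llm helper, an illustrative prompt, and a naive line-based parser for the replies, none of which come from the paper.

```python
def instantiate_classes(classes: list[str], call_llm, attempts: int = 3) -> dict[str, list[str]]:
    """Query the model several times per class and keep the union of proposed
    instances, deduplicated while preserving order."""
    instances: dict[str, list[str]] = {}
    for cls in classes:
        seen: list[str] = []
        for _ in range(attempts):
            reply = call_llm(f"List ten real-world instances of the class '{cls}', one per line.")
            for line in reply.splitlines():
                name = line.strip("-*• \t")
                if name and name not in seen:
                    seen.append(name)
        instances[cls] = seen
    return instances
```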
arXiv Detail & Related papers (2024-04-05T14:04:07Z)
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
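To make those three steps concrete, here is a deliberately simplified sketch: a plain list of facts stands in for step 1's prepared knowledge, a naive keyword filter stands in for step 2's per-sample selection, and a bulleted context block stands in for step 3's LLM-readable expression. The selection heuristic and call_llm helper are assumptions for illustration, not DOKE's actual extractor.

```python
def doke_style_prompt(sample: str, knowledge_base: list[str], call_llm) -> str:
    """Simplified version of the three-step recipe described above."""
    # Step 1: `knowledge_base` plays the role of the prepared, task-relevant facts.
    # Step 2: naive per-sample selection; a real extractor is task-specific.
    words = set(sample.lower().split())
    relevant = [fact for fact in knowledge_base if words & set(fact.lower().split())]
    # Step 3: express the selected facts as plain text the LLM can condition on.
    context = "\n".join(f"- {fact}" for fact in relevant) or "- (no extra knowledge selected)"
    return call_llm(f"Background knowledge:\n{context}\n\nTask input:\n{sample}\n\nAnswer:")
```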
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey [100.24095818099522]
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP).
They provide a highly useful, task-agnostic foundation for a wide range of applications.
However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles.
arXiv Detail & Related papers (2023-05-30T03:00:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.