Towards Complex Ontology Alignment using Large Language Models
- URL: http://arxiv.org/abs/2404.10329v2
- Date: Mon, 22 Jul 2024 20:07:33 GMT
- Title: Towards Complex Ontology Alignment using Large Language Models
- Authors: Reihaneh Amini, Sanaz Saki Norouzi, Pascal Hitzler, Reza Amini
- Abstract summary: Ontology alignment is a critical process in the Semantic Web for detecting relationships between different ontologies.
Recent advancements in Large Language Models (LLMs) present new opportunities for enhancing ontology engineering practices.
This paper investigates the application of LLM technologies to tackle the complex ontology alignment challenge.
- Score: 1.3218260503808055
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Ontology alignment, a critical process in the Semantic Web for detecting relationships between different ontologies, has traditionally focused on identifying so-called "simple" 1-to-1 relationships through the comparison of class labels and properties. The more practically useful exploration of more complex alignments remains a hard problem to automate, and as such is largely underexplored, i.e., in application practice it is usually done manually by ontology and domain experts. Recently, the surge in Natural Language Processing (NLP) capabilities, driven by advancements in Large Language Models (LLMs), presents new opportunities for enhancing ontology engineering practices, including ontology alignment tasks. This paper investigates the application of LLM technologies to tackle the complex ontology alignment challenge. Leveraging a prompt-based approach and integrating rich ontology content, so-called modules, our work constitutes a significant advance towards automating the complex alignment task.
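To make the prompt-based idea concrete, below is a minimal sketch of how an LLM might be asked to propose complex correspondences between two ontology modules. It assumes the `openai` Python client; the module snippets, prompt wording, and rule format are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: prompt an LLM with two ontology "modules" (small,
# self-contained fragments) and ask for complex correspondences.
# Module texts and prompt wording are illustrative, not the paper's prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def align_modules(source_module: str, target_module: str) -> str:
    """Ask the model for alignment rules between two ontology fragments."""
    prompt = (
        "You are an ontology alignment assistant.\n\n"
        f"Source module (Turtle):\n{source_module}\n\n"
        f"Target module (Turtle):\n{target_module}\n\n"
        "Propose correspondences, including complex (1-to-n) ones, as rules "
        "of the form: source-pattern <-> target-pattern."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output aids reproducibility
    )
    return resp.choices[0].message.content

src = ":CruiseVessel :hasCaptain :Person ."  # invented example triples
tgt = ":Cruise :operatedBy :CrewMember ."
print(align_modules(src, tgt))
```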
Related papers
- LLMs4Life: Large Language Models for Ontology Learning in Life Sciences [10.658387847149195]
Existing Large Language Models (LLMs) struggle to generate ontologies with multiple hierarchical levels, rich interconnections, and comprehensive coverage.
We extend the NeOn-GPT pipeline for ontology learning using LLMs with advanced prompt engineering techniques.
Our evaluation shows the viability of LLMs for ontology learning in specialized domains, providing solutions to longstanding limitations in model performance and scalability.
arXiv Detail & Related papers (2024-12-02T23:31:52Z)
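As a hedged illustration of the prompt-engineering angle in LLMs4Life, the snippet below builds a few-shot prompt that asks a model to extend a domain ontology with multi-level subclass axioms. The example domain, seed terms, and axiom format are assumptions, not the NeOn-GPT prompts.

```python
# Illustrative few-shot prompt template for LLM-driven ontology learning.
# Domain snippets and the axiom syntax are assumptions for illustration.
def build_ontology_prompt(domain: str, seed_terms: list[str]) -> str:
    example = (
        "Domain: cell biology\n"
        "Terms: Cell, Neuron\n"
        "Axioms:\n"
        "Neuron SubClassOf Cell\n"
    )
    return (
        "Extend the ontology with subclass axioms spanning multiple "
        "hierarchy levels and with object properties between classes.\n\n"
        f"Example:\n{example}\n"
        f"Domain: {domain}\n"
        f"Terms: {', '.join(seed_terms)}\n"
        "Axioms:"
    )

# Feed the returned prompt to any chat LLM and parse the axioms it emits.
print(build_ontology_prompt("plant genetics", ["Gene", "Allele", "Trait"]))
```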
- End-to-End Ontology Learning with Large Language Models [11.755755139228219]
Large language models (LLMs) have been applied to solve various subtasks of ontology learning.
We address this gap with OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch.
In contrast to standard metrics, our metrics use deep learning techniques to define more robust structural distance measures between graphs.
Our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples.
arXiv Detail & Related papers (2024-10-31T02:52:39Z)
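The "robust structural distance" idea can be approximated with off-the-shelf text embeddings: score how many taxonomy edges in one graph have a semantically close counterpart in the other. This is a sketch assuming sentence-transformers; it is not OLLM's exact metric.

```python
# Sketch of an embedding-based structural measure between two taxonomies,
# in the spirit of OLLM's robust graph metrics. Matching edges by cosine
# similarity of verbalized edges is an assumption, not the paper's metric.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def soft_edge_overlap(edges_pred, edges_gold, threshold=0.8):
    """Fraction of predicted edges with a semantically close gold edge."""
    verbalize = lambda p, c: f"{c} is a subtype of {p}"
    ep = model.encode([verbalize(p, c) for p, c in edges_pred],
                      normalize_embeddings=True)
    eg = model.encode([verbalize(p, c) for p, c in edges_gold],
                      normalize_embeddings=True)
    sims = ep @ eg.T  # cosine similarity matrix (pred x gold)
    return float((sims.max(axis=1) >= threshold).mean())

gold = [("Model", "Language Model"), ("Paper", "Preprint")]
pred = [("Model", "LLM"), ("Document", "Preprint")]
print(soft_edge_overlap(pred, gold))  # soft precision of predicted edges
```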
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval-enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces Retrieval-Enhanced Machine Learning (REML), a formal framework for this paradigm, by synthesizing the literature across ML domains under a consistent notation that the current literature lacks.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
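A toy instance of the REML pattern, where a predictor conditions on items retrieved from an external memory; the nearest-neighbour retriever and majority-vote predictor below are illustrative stand-ins for the framework's components, not anything defined in the paper.

```python
# Toy retrieval-enhanced classifier: prediction is conditioned on items
# fetched from an external corpus (here, inner-product nearest neighbours).
import numpy as np

class RetrievalEnhancedClassifier:
    def __init__(self, corpus_vecs, corpus_labels, k=3):
        self.vecs, self.labels, self.k = corpus_vecs, corpus_labels, k

    def retrieve(self, query_vec):
        sims = self.vecs @ query_vec          # inner-product retrieval
        return np.argsort(sims)[-self.k:]     # indices of top-k hits

    def predict(self, query_vec):
        hits = self.retrieve(query_vec)
        votes = [self.labels[i] for i in hits]
        return max(set(votes), key=votes.count)  # majority vote over hits

corpus = np.random.default_rng(0).normal(size=(100, 8))
labels = [i % 2 for i in range(100)]
clf = RetrievalEnhancedClassifier(corpus, labels)
print(clf.predict(corpus[0]))
```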
- Multi-step Inference over Unstructured Data [2.169874047093392]
High-stakes decision-making tasks in fields such as medicine, law, and finance demand precision, comprehensiveness, and logical consistency.
We have developed a neuro-symbolic AI platform to tackle these problems.
The platform integrates fine-tuned LLMs for knowledge extraction and alignment with a robust symbolic reasoning engine.
arXiv Detail & Related papers (2024-06-26T00:00:45Z)
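The division of labour in such neuro-symbolic platforms can be sketched as follows: assume an LLM has already extracted fact triples from documents, then a tiny forward-chaining engine performs the multi-step inference symbolically. The facts and rule below are invented, and the engine is a toy, not the platform's reasoner.

```python
# Division-of-labour sketch: LLM-extracted triples (assumed given) feed a
# minimal forward-chaining engine that derives conclusions symbolically.
facts = {("acme", "acquired", "widgetco"), ("widgetco", "owns", "patent42")}
# Rule (p1, p2, p3): if X p1 Y and Y p2 Z, then X p3 Z.
rules = [("acquired", "owns", "controls")]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:  # iterate to a fixpoint
        changed = False
        for p1, p2, p3 in rules:
            for (a, pa, b) in list(derived):
                for (c, pc, d) in list(derived):
                    if pa == p1 and pc == p2 and b == c \
                            and (a, p3, d) not in derived:
                        derived.add((a, p3, d))
                        changed = True
    return derived

print(("acme", "controls", "patent42") in forward_chain(facts, rules))  # True
```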
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and metadata.
However, the logical reasoning that ontologies can directly support is quite limited in learning, approximation, and prediction.
One straightforward solution is to integrate statistical analysis and machine learning.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
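One classic way to integrate statistical learning with ontologies, as surveyed above, is a translational embedding of triples. Below is a minimal TransE-style sketch with a toy gradient loop; the data, dimensions, and learning rate are illustrative only.

```python
# Minimal TransE-style sketch: embed triples (h, r, t) so that h + r ≈ t.
# Toy data and a plain-SGD loop; not any specific surveyed system.
import numpy as np

triples = [("Neuron", "subClassOf", "Cell"), ("Cell", "partOf", "Tissue")]
ents = {e for h, _, t in triples for e in (h, t)}
rels = {r for _, r, _ in triples}
rng = np.random.default_rng(0)
E = {x: rng.normal(scale=0.1, size=16) for x in ents}
R = {x: rng.normal(scale=0.1, size=16) for x in rels}

for _ in range(200):                 # plain SGD on ||h + r - t||^2
    for h, r, t in triples:
        grad = 2 * (E[h] + R[r] - E[t])
        E[h] -= 0.05 * grad
        R[r] -= 0.05 * grad
        E[t] += 0.05 * grad

h, r, t = triples[0]
print(np.linalg.norm(E[h] + R[r] - E[t]))  # near 0 after training
```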
- Scalable Language Model with Generalized Continual Learning [58.700439919096155]
Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z)
- Large language models as oracles for instantiating ontologies with domain-specific knowledge [0.0]
We propose a domain-independent approach to automatically instantiate ontologies with domain-specific knowledge.
Our method queries the LLM multiple times and generates instances for classes and properties from its replies.
Experimentally, our method achieves accuracy up to five times higher than the state of the art.
arXiv Detail & Related papers (2024-04-05T14:04:07Z)
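The query-the-model-multiple-times idea can be sketched as repeated sampling plus voting: keep only instances that recur across replies. `ask_llm` is a placeholder for any chat-completion call; the query count and vote threshold are assumptions, not the paper's settings.

```python
# Sketch of the LLM-as-oracle idea: ask for instances of a class several
# times and keep answers that recur, trading extra queries for reliability.
from collections import Counter
from typing import Callable

def instantiate(ask_llm: Callable[[str], str], cls: str,
                n_queries: int = 5, min_votes: int = 3) -> list[str]:
    counts: Counter[str] = Counter()
    prompt = f"List instances of the class '{cls}', one per line."
    for _ in range(n_queries):
        for line in ask_llm(prompt).splitlines():
            if line.strip():
                counts[line.strip().lower()] += 1
    return [name for name, c in counts.items() if c >= min_votes]

# Usage with a stubbed oracle (a real call would hit an LLM API):
stub = lambda p: "Danube\nRhine\nNile"
print(instantiate(stub, "River"))
```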
- Noise Contrastive Estimation-based Matching Framework for Low-Resource Security Attack Pattern Recognition [49.536368818512116]
Tactics, Techniques and Procedures (TTPs) represent sophisticated attack patterns in the cybersecurity domain.
We formulate the problem in a different learning paradigm, where the assignment of a text to a TTP label is decided by the direct semantic similarity between the two.
We propose a neural matching architecture with an effective sampling-based learn-to-compare mechanism.
arXiv Detail & Related papers (2024-01-18T19:02:00Z)
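The matching-by-similarity formulation above can be sketched with a sentence encoder: score the input text against every TTP label description and return the closest one. The encoder choice and label descriptions are assumptions; the paper additionally trains the matcher with a sampling-based learn-to-compare objective, which is omitted here.

```python
# Sketch of matching-by-similarity: score a text against every TTP label
# description and pick the closest, instead of multi-class classification.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
ttps = {  # invented, abbreviated label descriptions
    "T1566": "Phishing: sending deceptive messages to obtain access.",
    "T1059": "Command and Scripting Interpreter: abusing shells to run code.",
}

def match_ttp(text: str) -> str:
    labels = list(ttps)
    q = model.encode([text], normalize_embeddings=True)
    d = model.encode([ttps[l] for l in labels], normalize_embeddings=True)
    return labels[int((q @ d.T).argmax())]  # highest cosine similarity wins

print(match_ttp("The actor emailed a fake invoice to harvest credentials."))
```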
- Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey [100.24095818099522]
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP).
They provide a highly useful, task-agnostic foundation for a wide range of applications.
However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles.
arXiv Detail & Related papers (2023-05-30T03:00:30Z)
- Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to couple current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
- Dividing the Ontology Alignment Task with Semantic Embeddings and Logic-based Modules [15.904000789557486]
This paper presents an approach that combines a neural embedding model and logic-based modules to accurately divide an input ontology matching task into smaller and more tractable sub-tasks.
The results are encouraging and suggest that the proposed method is adequate in practice and can be integrated within the workflow of systems unable to cope with very large ontologies.
arXiv Detail & Related papers (2020-02-25T14:44:12Z)
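A rough sketch of the dividing idea from the paper above: embed entity labels from both ontologies, cluster them, and only attempt to match entities that fall in the same cluster. KMeans stands in here for the paper's combination of a neural embedding model and logic-based modules; the ontologies are invented.

```python
# Divide-and-match sketch: cluster label embeddings from both ontologies,
# then emit one small matching sub-task per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
src = ["Author", "Paper", "Conference"]          # toy source ontology labels
tgt = ["Writer", "Article", "Meeting", "Venue"]  # toy target ontology labels

vecs = model.encode(src + tgt, normalize_embeddings=True)
cluster = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)

# Each cluster yields a small source-vs-target matching sub-task.
for k in range(2):
    s = [e for e, c in zip(src, cluster[: len(src)]) if c == k]
    t = [e for e, c in zip(tgt, cluster[len(src):]) if c == k]
    print(f"sub-task {k}: {s} vs {t}")
```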
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.