Agent-OM: Leveraging LLM Agents for Ontology Matching
- URL: http://arxiv.org/abs/2312.00326v3
- Date: Mon, 29 Jul 2024 13:40:11 GMT
- Title: Agent-OM: Leveraging LLM Agents for Ontology Matching
- Authors: Zhangcheng Qiang, Weiqing Wang, Kerry Taylor
- Abstract summary: This study introduces a novel agent-powered design paradigm for ontology matching (OM) systems.
We propose a framework, namely Agent-OM (w.r.t. Agent for Ontology Matching), consisting of two Siamese agents for retrieval and matching.
Our system can achieve results very close to the long-standing best performance on simple OM tasks.
- Score: 4.222245509121683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ontology matching (OM) enables semantic interoperability between different ontologies and resolves their conceptual heterogeneity by aligning related entities. OM systems currently have two prevailing design paradigms: conventional knowledge-based expert systems and newer machine learning-based predictive systems. While large language models (LLMs) and LLM agents have revolutionised data engineering and have been applied creatively in many domains, their potential for OM remains underexplored. This study introduces a novel agent-powered LLM-based design paradigm for OM systems. With consideration of several specific challenges in leveraging LLM agents for OM, we propose a generic framework, namely Agent-OM (w.r.t. Agent for Ontology Matching), consisting of two Siamese agents for retrieval and matching, with a set of simple OM tools. Our framework is implemented in a proof-of-concept system. Evaluations of three Ontology Alignment Evaluation Initiative (OAEI) tracks over state-of-the-art OM systems show that our system can achieve results very close to the long-standing best performance on simple OM tasks and can significantly improve the performance on complex and few-shot OM tasks.
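As a rough, hypothetical illustration of the two-agent design described in the abstract, the Python sketch below pairs a retrieval agent (shortlisting candidate entities from the target ontology) with a matching agent (filtering candidate pairs). It substitutes a plain string-similarity tool for Agent-OM's LLM-driven reasoning and uses toy dictionaries in place of real OWL ontologies; all names, labels, and thresholds are assumptions for illustration, not the paper's implementation.

```python
from difflib import SequenceMatcher

# Toy ontologies (entity URI -> label); real inputs would be OWL ontologies.
SOURCE = {"src:Person": "Person", "src:Paper": "Paper"}
TARGET = {"tgt:Person": "Person", "tgt:Article": "Article", "tgt:Human": "Human"}

def lexical_similarity(a: str, b: str) -> float:
    """A simple OM tool: normalised string similarity between two labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

class RetrievalAgent:
    """Shortlists the top-k candidate entities from the target ontology."""
    def __init__(self, target: dict, k: int = 2):
        self.target, self.k = target, k

    def retrieve(self, label: str) -> list:
        ranked = sorted(self.target.items(),
                        key=lambda item: lexical_similarity(label, item[1]),
                        reverse=True)
        return ranked[: self.k]

class MatchingAgent:
    """Accepts a candidate pair only if its similarity clears a threshold."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def match(self, src_label: str, candidates: list) -> list:
        return [(uri, lexical_similarity(src_label, lbl))
                for uri, lbl in candidates
                if lexical_similarity(src_label, lbl) >= self.threshold]

retriever, matcher = RetrievalAgent(TARGET), MatchingAgent()
for src_uri, label in SOURCE.items():
    for tgt_uri, score in matcher.match(label, retriever.retrieve(label)):
        print(f"{src_uri} = {tgt_uri}  (score {score:.2f})")
```

Running the sketch prints the single high-confidence correspondence src:Person = tgt:Person; in the actual framework the agents would plan tool use with an LLM rather than apply a fixed similarity rule.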
Related papers
- OM4OV: Leveraging Ontology Matching for Ontology Versioning [0.0]
We introduce a unified OM4OV approach to performing version control tasks.
We experimentally validate the OM4OV pipeline and its cross-reference mechanism using three datasets.
arXiv Detail & Related papers (2024-09-30T14:00:04Z) - VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents [50.12414817737912]
Large Multimodal Models (LMMs) have ushered in a new era in artificial intelligence, merging capabilities in both language and vision to form highly capable Visual Foundation Agents.
Existing benchmarks fail to sufficiently challenge or showcase the full potential of LMMs in complex, real-world environments.
VisualAgentBench (VAB) is a pioneering benchmark specifically designed to train and evaluate LMMs as visual foundation agents.
arXiv Detail & Related papers (2024-08-12T17:44:17Z) - A Comprehensive Review of Multimodal Large Language Models: Performance and Challenges Across Different Tasks [74.52259252807191]
Multimodal Large Language Models (MLLMs) address the complexities of real-world applications far beyond the capabilities of single-modality systems.
This paper systematically surveys the applications of MLLMs in multimodal tasks such as natural language, vision, and audio.
arXiv Detail & Related papers (2024-08-02T15:14:53Z) - CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze the MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z) - Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding [27.004817441034795]
Collaborative decoding between large language models (LLMs) and small language models (SLMs) presents a promising strategy to mitigate the high inference cost and latency of LLM-only generation.
Inspired by dual-process cognitive theory, we propose a unified framework, termed Fast and Slow Generating (FS-GEN).
Within this framework, LLMs are categorized as System 2 (slow and deliberate), while independent SLMs are designated as System 1.
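A minimal sketch of this fast-and-slow division of labour, not FS-GEN's actual algorithm: the small System-1 model drafts tokens greedily, and the large System-2 model is consulted only when the drafter's confidence drops below a threshold. The vocabulary, stub models, and threshold tau are all hypothetical.

```python
import numpy as np
from typing import Callable, List

VOCAB = ["ontology", "matching", "needs", "agents", "<eos>"]
NextToken = Callable[[List[str]], np.ndarray]  # prefix -> distribution over VOCAB

def collaborative_decode(slm: NextToken, llm: NextToken,
                         prompt: List[str], tau: float = 0.5,
                         max_len: int = 8) -> List[str]:
    """System 1 (SLM) drafts greedily; System 2 (LLM) is consulted only
    when the SLM's top probability falls below the confidence threshold tau."""
    out = list(prompt)
    while len(out) < max_len:
        probs = slm(out)
        if probs.max() < tau:      # low confidence -> defer to the slow model
            probs = llm(out)
        token = VOCAB[int(probs.argmax())]
        out.append(token)
        if token == "<eos>":
            break
    return out

# Stub models standing in for real language models.
rng = np.random.default_rng(0)
slm = lambda prefix: rng.dirichlet(np.ones(len(VOCAB)))            # noisy, uncertain
llm = lambda prefix: np.eye(len(VOCAB))[len(prefix) % len(VOCAB)]  # confident

print(collaborative_decode(slm, llm, ["ontology"]))
```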
arXiv Detail & Related papers (2024-06-18T05:59:28Z) - LLMs4OM: Matching Ontologies with Large Language Models [0.14999444543328289]
Ontology Matching (OM) is a critical task in knowledge integration, where aligning heterogeneous ontologies enables data interoperability and knowledge sharing.
We present the LLMs4OM framework, a novel approach to evaluate the effectiveness of Large Language Models (LLMs) in OM tasks.
arXiv Detail & Related papers (2024-04-16T06:55:45Z) - Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z) - Large Multi-Modal Models (LMMs) as Universal Foundation Models for AI-Native Wireless Systems [57.41621687431203]
Large language models (LLMs) and foundation models have been recently touted as a game-changer for 6G systems.
This paper presents a comprehensive vision on how to design universal foundation models tailored towards the deployment of artificial intelligence (AI)-native networks.
arXiv Detail & Related papers (2024-01-30T00:21:41Z) - Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts [74.40198929049959]
Large multi-modal models (LMMs) exhibit remarkable performance across numerous tasks.
However, generalist LMMs often suffer from performance degradation when tuned over a large collection of tasks.
We propose Omni-SMoLA, an architecture that uses the Soft MoE approach to mix many multimodal low-rank experts.
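A minimal PyTorch sketch of the general idea, assuming a frozen base linear layer augmented by several LoRA-style low-rank experts mixed with soft router weights; the shapes, expert count, and initialisation are illustrative assumptions, not Omni-SMoLA's actual architecture.

```python
import torch
import torch.nn as nn

class SoftMoLoRA(nn.Module):
    """Soft mixture of low-rank experts on top of a frozen base linear layer."""
    def __init__(self, d_in: int, d_out: int, n_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():          # frozen generalist backbone
            p.requires_grad_(False)
        self.A = nn.Parameter(0.01 * torch.randn(n_experts, rank, d_in))
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        self.router = nn.Linear(d_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); soft routing lets every expert contribute.
        weights = torch.softmax(self.router(x), dim=-1)      # (batch, n_experts)
        h = torch.einsum("erd,bd->ber", self.A, x)           # low-rank down-projection
        h = torch.einsum("eor,ber->beo", self.B, h)          # low-rank up-projection
        delta = (weights.unsqueeze(-1) * h).sum(dim=1)       # soft mixture of experts
        return self.base(x) + delta

layer = SoftMoLoRA(d_in=16, d_out=16)
print(layer(torch.randn(2, 16)).shape)   # torch.Size([2, 16])
```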
arXiv Detail & Related papers (2023-12-01T23:04:27Z) - Theory of Mind for Multi-Agent Collaboration via Large Language Models [5.2767999863286645]
This study evaluates Large Language Models (LLMs)-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks.
We observed evidence of emergent collaborative behaviors and high-order Theory of Mind capabilities among LLM-based agents.
arXiv Detail & Related papers (2023-10-16T07:51:19Z) - Machine Learning-Friendly Biomedical Datasets for Equivalence and Subsumption Ontology Matching [35.76522395991403]
We introduce five new Ontology Matching (OM) tasks involving ontologies extracted from Mondo and UMLS.
Each task includes both equivalence and subsumption matching; the quality of reference mappings is ensured by human curation.
A comprehensive evaluation framework is proposed to measure OM performance from various perspectives for both ML-based and non-ML-based OM systems.
arXiv Detail & Related papers (2022-05-06T18:52:53Z)
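To illustrate how such an evaluation framework might score a system, the sketch below computes precision, recall, and F1 of predicted mappings against human-curated references, treating equivalence ("=") and subsumption ("<") mappings separately; the Mondo/UMLS-style identifiers and mappings are placeholders, not Bio-ML data.

```python
def precision_recall_f1(predicted: set, reference: set) -> tuple:
    """Standard OM scores of predicted mappings against curated references."""
    tp = len(predicted & reference)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(reference) if reference else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Placeholder mappings: (source entity, relation, target entity).
reference = {("mondo:0000001", "=", "umls:C0000001"),
             ("mondo:0000002", "<", "umls:C0000002")}
predicted = {("mondo:0000001", "=", "umls:C0000001"),
             ("mondo:0000003", "=", "umls:C0000003")}

# Score equivalence and subsumption mappings separately.
for rel, name in (("=", "equivalence"), ("<", "subsumption")):
    pred = {m for m in predicted if m[1] == rel}
    ref = {m for m in reference if m[1] == rel}
    print(name, precision_recall_f1(pred, ref))
```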