Truveta Mapper: A Zero-shot Ontology Alignment Framework
- URL: http://arxiv.org/abs/2301.09767v3
- Date: Tue, 22 Aug 2023 00:22:42 GMT
- Title: Truveta Mapper: A Zero-shot Ontology Alignment Framework
- Authors: Mariyam Amir, Murchana Baruah, Mahsa Eslamialishah, Sina Ehsani,
Alireza Bahramali, Sadra Naddaf-Sh, Saman Zarandioon
- Abstract summary: A new perspective is suggested for unsupervised Ontology Matching (OM) or Ontology Alignment (OA).
The proposed framework, Truveta Mapper (TM), leverages a multi-task sequence-to-sequence transformer model to perform alignment across multiple ontologies in a zero-shot, unified and end-to-end manner.
TM is pre-trained and fine-tuned only on a publicly available text corpus and inner-ontologies data.
- Score: 3.5284865194805106
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, a new perspective is suggested for unsupervised Ontology
Matching (OM) or Ontology Alignment (OA) by treating it as a translation task.
Ontologies are represented as graphs, and the translation is performed from a
node in the source ontology graph to a path in the target ontology graph. The
proposed framework, Truveta Mapper (TM), leverages a multi-task
sequence-to-sequence transformer model to perform alignment across multiple
ontologies in a zero-shot, unified and end-to-end manner. Multi-tasking enables
the model to implicitly learn the relationship between different ontologies via
transfer-learning without requiring any explicit cross-ontology manually
labeled data. This also enables the formulated framework to outperform existing
solutions for both runtime latency and alignment quality. The model is
pre-trained and fine-tuned only on a publicly available text corpus and
inner-ontologies data. The proposed solution outperforms state-of-the-art
approaches (Edit-Similarity, LogMap, AML, BERTMap) and the new OM frameworks
recently presented in the Ontology Alignment Evaluation Initiative (OAEI22), offers
log-linear complexity, and overall makes the OM task efficient and more
straightforward without much post-processing involving mapping extension or
mapping repair. We are open-sourcing our solution.
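To make the node-to-path translation formulation concrete, the sketch below shows how a single multi-task sequence-to-sequence model could be prompted to map a source-ontology concept to a path in a target ontology. It uses a generic Hugging Face T5 checkpoint purely as a stand-in; the task-prefix format, the example SNOMED CT / ICD-10-CM labels, and the map_concept helper are illustrative assumptions, not the released Truveta Mapper implementation.

```python
# Minimal sketch: ontology matching framed as translation from a source-ontology
# node to a path in the target ontology graph.
# Assumptions (not from the paper's released code): a generic seq2seq checkpoint
# stands in for the fine-tuned model, and the prompt format and concept labels
# are purely illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-small"  # placeholder; TM pre-trains/fine-tunes its own model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def map_concept(source_label: str, source_onto: str, target_onto: str) -> str:
    """Translate a source-ontology node label into a target-ontology path."""
    # A task prefix lets one multi-task model serve several ontology pairs;
    # the exact prompt used by the actual framework may differ.
    prompt = f"map {source_onto} to {target_onto}: {source_label}"
    inputs = tokenizer(prompt, return_tensors="pt")
    # Plain beam search here; a full system would typically constrain decoding
    # so that outputs are valid paths in the target ontology graph.
    output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example call (output is meaningless without ontology fine-tuning):
print(map_concept("myocardial infarction", "SNOMEDCT", "ICD10CM"))
```

Because one model handles every ontology pair via the task prefix, alignment for a new pair reduces to a single generation call per source node, which is where the zero-shot, end-to-end framing and the low runtime latency come from.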
Related papers
- End-to-End Ontology Learning with Large Language Models [11.755755139228219]
Large language models (LLMs) have been applied to solve various subtasks of ontology learning.
We address this gap with OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch.
In contrast to standard metrics, our metrics use deep learning techniques to define more robust structural distance measures between graphs.
Our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples.
arXiv Detail & Related papers (2024-10-31T02:52:39Z) - Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z) - Multi-Scene Generalized Trajectory Global Graph Solver with Composite
Nodes for Multiple Object Tracking [61.69892497726235]
Composite Node Message Passing Network (CoNo-Link) is a framework for modeling information over ultra-long frame sequences for association.
In addition to the previous method of treating objects as nodes, the network innovatively treats object trajectories as nodes for information interaction.
Our model can learn better predictions on longer-time scales by adding composite nodes.
arXiv Detail & Related papers (2023-12-14T14:00:30Z) - Single-Stage Visual Relationship Learning using Conditional Queries [60.90880759475021]
TraCQ is a new formulation for scene graph generation that avoids the multi-task learning problem and the entity pair distribution.
We employ a DETR-based encoder-decoder with conditional queries to significantly reduce the entity label space as well.
Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods, it also beats many state-of-the-art two-stage methods on the Visual Genome dataset.
arXiv Detail & Related papers (2023-06-09T06:02:01Z) - G-MSM: Unsupervised Multi-Shape Matching with Graph-based Affinity
Priors [52.646396621449]
G-MSM is a novel unsupervised learning approach for non-rigid shape correspondence.
We construct an affinity graph on a given set of training shapes in a self-supervised manner.
We demonstrate state-of-the-art performance on several recent shape correspondence benchmarks.
arXiv Detail & Related papers (2022-12-06T12:09:24Z) - TokenFlow: Rethinking Fine-grained Cross-modal Alignment in
Vision-Language Retrieval [30.429340065755436]
We devise a new model-agnostic formulation for fine-grained cross-modal alignment.
Inspired by optimal transport theory, we introduce TokenFlow, an instantiation of the proposed scheme.
arXiv Detail & Related papers (2022-09-28T04:11:05Z) - Counterfactual Intervention Feature Transfer for Visible-Infrared Person
Re-identification [69.45543438974963]
We find graph-based methods in the visible-infrared person re-identification task (VI-ReID) suffer from bad generalization because of two issues.
Well-trained input features weaken the learning of graph topology, so the learned topology does not generalize well during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - Decoupled Multi-task Learning with Cyclical Self-Regulation for Face
Parsing [71.19528222206088]
We propose a novel Decoupled Multi-task Learning with Cyclical Self-Regulation (DML-CSR) approach for face parsing.
Specifically, DML-CSR designs a multi-task model which comprises face parsing, binary edge detection, and category edge detection.
Our method achieves the new state-of-the-art performance on the Helen, CelebA-HQ, and LapaMask datasets.
arXiv Detail & Related papers (2022-03-28T02:12:30Z) - Neural Transition System for End-to-End Opinion Role Labeling [13.444895891262844]
Unified opinion role labeling (ORL) aims to detect all possible opinion structures of 'opinion-holder-target' in one shot, given a text.
We propose a novel solution by revisiting the transition architecture, and augment it with a pointer network (PointNet)
The framework parses out all opinion structures in linear time and, with PointNet, removes the limitation on term length.
arXiv Detail & Related papers (2021-10-05T12:45:59Z) - Learning a Proposal Classifier for Multiple Object Tracking [36.67900094433032]
We propose a novel proposal-based learnable framework, which models MOT as a proposal generation, proposal scoring and trajectory inference paradigm on an affinity graph.
We experimentally demonstrate that the proposed method achieves a clear performance improvement in both MOTA and IDF1 with respect to previous state-of-the-art on two public benchmarks.
arXiv Detail & Related papers (2021-03-14T10:46:54Z) - Learning an Interpretable Graph Structure in Multi-Task Learning [18.293397644865454]
We present a novel methodology to jointly perform multi-task learning and infer intrinsic relationship among tasks by an interpretable and sparse graph.
Our graph is learned simultaneously with model parameters of each task, thus it reflects the critical relationship among tasks in the specific prediction problem.
arXiv Detail & Related papers (2020-09-11T18:58:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.