Dividing the Ontology Alignment Task with Semantic Embeddings and
Logic-based Modules
- URL: http://arxiv.org/abs/2003.05370v1
- Date: Tue, 25 Feb 2020 14:44:12 GMT
- Title: Dividing the Ontology Alignment Task with Semantic Embeddings and
Logic-based Modules
- Authors: Ernesto Jiménez-Ruiz, Asan Agibetov, Jiaoyan Chen, Matthias Samwald,
Valerie Cross
- Abstract summary: This paper presents an approach that combines a neural embedding model and logic-based modules to accurately divide an input ontology matching task into smaller and more tractable subtasks.
The results are encouraging and suggest that the proposed method is adequate in practice and can be integrated within the workflow of systems unable to cope with very large ontologies.
- Score: 15.904000789557486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large ontologies still pose serious challenges to state-of-the-art ontology
alignment systems. In this paper we present an approach that combines a neural
embedding model and logic-based modules to accurately divide an input ontology
matching task into smaller and more tractable matching (sub)tasks. We have
conducted a comprehensive evaluation using the datasets of the Ontology
Alignment Evaluation Initiative. The results are encouraging and suggest that
the proposed method is adequate in practice and can be integrated within the
workflow of systems unable to cope with very large ontologies.
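For intuition, here is a minimal, self-contained sketch of the divide step: entity labels from both ontologies are embedded in one shared space and clustered, and each cluster yields a smaller matching sub-task. The TF-IDF character n-gram embedding and k-means are stand-ins chosen for illustration; the paper itself uses a neural embedding model together with logic-based modules to construct the sub-tasks.

```python
# Toy sketch of dividing a matching task by clustering label embeddings.
# TF-IDF over character n-grams stands in for the paper's neural embedding
# model (an assumption made only so this example is self-contained).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def embed(labels, vectorizer=None):
    """Embed entity labels into a shared vector space."""
    if vectorizer is None:
        vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
        return vectorizer.fit_transform(labels).toarray(), vectorizer
    return vectorizer.transform(labels).toarray(), vectorizer

def divide_matching_task(src_labels, tgt_labels, n_clusters=3):
    """Split the (src x tgt) matching space into smaller (sub)tasks."""
    X_src, vec = embed(src_labels)
    X_tgt, _ = embed(tgt_labels, vec)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(np.vstack([X_src, X_tgt]))
    src_c, tgt_c = km.predict(X_src), km.predict(X_tgt)
    # One sub-task per cluster: only entities in the same region of the
    # embedding space are compared against each other.
    return [
        ([s for s, c in zip(src_labels, src_c) if c == k],
         [t for t, c in zip(tgt_labels, tgt_c) if c == k])
        for k in range(n_clusters)
    ]

if __name__ == "__main__":
    src = ["heart", "cardiac valve", "kidney", "renal cortex", "femur"]
    tgt = ["myocardium", "nephron", "thigh bone", "aortic valve"]
    for i, (s, t) in enumerate(divide_matching_task(src, tgt)):
        print(f"sub-task {i}: {s} vs {t}")
```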
Related papers
- Universal Topology Refinement for Medical Image Segmentation with Polynomial Feature Synthesis [19.2371330932614]
Medical image segmentation methods often neglect topological correctness, making their segmentations unusable for many downstream tasks.
One option is to retrain such models whilst including a topology-driven loss component.
We present a plug-and-play topology refinement method that is compatible with any domain-specific segmentation pipeline.
arXiv Detail & Related papers (2024-09-15T17:07:58Z)
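As a toy illustration of what a topology-driven loss component could measure, the sketch below penalizes the mismatch in connected-component counts (a Betti-0 proxy) between a binarized prediction and the ground truth. This is not the paper's method, and the raw count is not differentiable; practical topology losses use differentiable surrogates.

```python
# Crude Betti-0 proxy for a topology-driven penalty (illustration only).
import numpy as np
from scipy.ndimage import label

def topology_penalty(pred_mask: np.ndarray, gt_mask: np.ndarray) -> int:
    """Absolute difference in the number of foreground components."""
    _, n_pred = label(pred_mask)
    _, n_gt = label(gt_mask)
    return abs(n_pred - n_gt)

pred = np.array([[1, 0, 1],
                 [0, 0, 0],
                 [1, 0, 0]])   # three components
gt = np.array([[1, 1, 1],
               [0, 0, 0],
               [0, 0, 0]])     # one component
print(topology_penalty(pred, gt))  # -> 2
```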
- Towards Complex Ontology Alignment using Large Language Models [1.3218260503808055]
Ontology alignment is a critical process in the Semantic Web for detecting relationships between the labels and content of different ontologies.
Recent advancements in Large Language Models (LLMs) present new opportunities for enhancing ontology engineering practices.
This paper investigates the application of LLM technologies to tackle the complex alignment challenge.
arXiv Detail & Related papers (2024-04-16T07:13:22Z) - Logic-induced Diagnostic Reasoning for Semi-supervised Semantic
Segmentation [85.12429517510311]
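A minimal sketch of how an LLM could be applied to the alignment challenge described above; `llm_complete` is a hypothetical stand-in for whatever model API is used, stubbed here so the sketch runs end-to-end.

```python
# Hedged sketch of prompting an LLM to judge candidate correspondences.
def llm_complete(prompt: str) -> str:
    # Hypothetical placeholder: wire this to your LLM provider of choice.
    return "yes"  # stub answer so the sketch is runnable

def judge_mapping(src_entity: str, tgt_entity: str) -> bool:
    prompt = (
        "You are aligning two ontologies.\n"
        f"Source concept: {src_entity}\n"
        f"Target concept: {tgt_entity}\n"
        "Do these denote the same (or equivalent) concept? Answer yes or no."
    )
    return llm_complete(prompt).strip().lower().startswith("yes")

print(judge_mapping("myocardial infarction", "heart attack"))
```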
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
arXiv Detail & Related papers (2023-08-24T06:50:07Z) - Truveta Mapper: A Zero-shot Ontology Alignment Framework [3.5284865194805106]
- Truveta Mapper: A Zero-shot Ontology Alignment Framework [3.5284865194805106]
A new perspective is suggested for unsupervised Ontology Matching (OM) or Ontology Alignment (OA).
The proposed framework, Truveta Mapper (TM), leverages a multi-task sequence-to-sequence transformer model to perform alignment across multiple ontologies in a zero-shot, unified and end-to-end manner.
TM is pre-trained and fine-tuned only on publicly available text corpora and inner-ontology data.
arXiv Detail & Related papers (2023-01-24T00:32:56Z) - Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
- Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
arXiv Detail & Related papers (2022-06-01T12:02:38Z) - Semantic Search for Large Scale Clinical Ontologies [63.71950996116403]
- Semantic Search for Large Scale Clinical Ontologies [63.71950996116403]
We present a deep learning approach to build a search system for large clinical vocabularies.
We propose a Triplet-BERT model and a method for generating its semantic training data.
The model is evaluated on five real benchmark data sets, and the results show that our approach achieves high performance on both free-text-to-concept and concept-to-concept search over clinical vocabularies.
arXiv Detail & Related papers (2022-01-01T05:15:42Z) - Partitioned Active Learning for Heterogeneous Systems [5.331649110169476]
- Partitioned Active Learning for Heterogeneous Systems [5.331649110169476]
We propose a partitioned active learning strategy built upon partitioned Gaussian process (PGP) modeling.
A global searching scheme accelerates the exploration aspect of active learning.
A local searching scheme exploits the active learning criterion induced by the local GP model.
arXiv Detail & Related papers (2021-05-14T02:05:31Z) - Neural Function Modules with Sparse Arguments: A Dynamic Approach to
Integrating Information across Layers [84.57980167400513]
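A simplified sketch of the partitioned strategy: one GP per partition, a local search for the most uncertain candidate inside each partition, and a global comparison across partitions. The partitioning, kernel, and acquisition are illustrative stand-ins, not the paper's exact criteria.

```python
# Partitioned active learning sketch: local search per region, global pick.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.where(x < 0.5, np.sin(20 * x), 0.1 * x)  # heterogeneous
partitions = [(0.0, 0.5), (0.5, 1.0)]
X = [rng.uniform(lo, hi, 3).reshape(-1, 1) for lo, hi in partitions]
y = [f(x).ravel() for x in X]

def next_query():
    best = None
    for k, (lo, hi) in enumerate(partitions):
        gp = GaussianProcessRegressor().fit(X[k], y[k])  # local GP
        cand = np.linspace(lo, hi, 50).reshape(-1, 1)
        _, std = gp.predict(cand, return_std=True)
        i = int(np.argmax(std))                    # local search
        if best is None or std[i] > best[0]:       # global comparison
            best = (std[i], k, cand[i])
    return best[1], best[2]

k, x_new = next_query()
print(f"query partition {k} at x={x_new.ravel()[0]:.3f}")
```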
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aim to introduce the structural capability of modular, reusable functions into deep learning.
Most prior work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
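A rough sketch of the named ingredients in combination: a top-down query attends over bottom-up features from earlier layers, and a top-k mask keeps the attended arguments sparse. This illustrates the combination only, not the paper's exact module.

```python
# Sparse top-down attention over bottom-up features (illustration only).
import torch
import torch.nn.functional as F

def sparse_topdown_attention(query, features, k=2):
    """query: (d,) top-down signal; features: (n, d) from earlier layers."""
    scores = features @ query                 # bottom-up relevance scores
    topk = torch.topk(scores, k).indices      # sparse argument set
    weights = F.softmax(scores[topk], dim=0)  # attend only to top-k
    return (weights[:, None] * features[topk]).sum(0)

torch.manual_seed(0)
feats = torch.randn(5, 8)   # five candidate inputs from lower layers
q = torch.randn(8)          # top-down signal from a higher layer
print(sparse_topdown_attention(q, feats).shape)  # torch.Size([8])
```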
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
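A compact sketch of the aggregation mechanism: an entropic (Sinkhorn) transport plan between the input set and a trainable reference, with elements pooled onto the reference positions to give a fixed-size embedding. The set size, cost, and Sinkhorn settings below are illustrative.

```python
# Optimal-transport feature aggregation via Sinkhorn (illustration only).
import torch

def sinkhorn(C, eps=0.05, iters=50):
    """Entropic OT plan for uniform marginals; C: (n, m) cost matrix."""
    n, m = C.shape
    K = torch.exp(-C / eps)
    a, b = torch.ones(n) / n, torch.ones(m) / m   # uniform marginals
    v = torch.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

torch.manual_seed(0)
x = torch.randn(7, 16)                        # input set of 7 elements
ref = torch.randn(4, 16, requires_grad=True)  # trainable reference
C = torch.cdist(x, ref) ** 2
C = C / C.max()                               # normalize for stability
P = sinkhorn(C)                               # (7, 4) transport plan
embedding = (P.T @ x).reshape(-1)             # fixed size: 4 * 16 = 64
print(embedding.shape)                        # torch.Size([64])
```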
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
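A minimal NumPy sketch of truncated max-product (min-sum) belief propagation on a 1-D chain, the kind of fixed-sweep computation one might unroll inside a network. The unary and pairwise costs here are random toys, not the paper's learned models.

```python
# Truncated max-product (min-sum) BP on a chain (illustration only).
import numpy as np

def truncated_max_product(unary, pairwise, iters=3):
    """unary: (n, L) label costs; pairwise: (L, L) transition costs."""
    n, L = unary.shape
    msg = np.zeros((n, L))        # messages passed left-to-right only;
                                  # a full BP layer also sweeps right-to-left
    for _ in range(iters):        # truncated: fixed number of sweeps
        for i in range(1, n):
            # min-sum message update (max-product in the log domain)
            msg[i] = np.min(unary[i - 1] + msg[i - 1] + pairwise.T, axis=1)
    beliefs = unary + msg
    return beliefs.argmin(axis=1)

rng = np.random.default_rng(0)
unary = rng.random((6, 3))
pairwise = 0.5 * (1 - np.eye(3))  # Potts-style smoothness cost
print(truncated_max_product(unary, pairwise))
```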
This list is automatically generated from the titles and abstracts of the papers on this site.