Matching with Transformers in MELT
- URL: http://arxiv.org/abs/2109.07401v1
- Date: Wed, 15 Sep 2021 16:07:43 GMT
- Title: Matching with Transformers in MELT
- Authors: Sven Hertling, Jan Portisch, Heiko Paulheim
- Abstract summary: We provide an easy-to-use implementation in the MELT framework which is suited for ontology and knowledge graph matching.
We show that a transformer-based filter helps to choose the correct correspondences given a high-recall alignment.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the strongest signals for automated matching of ontologies and
knowledge graphs are the textual descriptions of the concepts. The methods that
are typically applied (such as character- or token-based comparisons) are
relatively simple, and therefore do not capture the actual meaning of the
texts. With the rise of transformer-based language models, text comparison
based on meaning (rather than lexical features) is possible. In this paper, we
model the ontology matching task as a classification problem and present
approaches based on transformer models. We further provide an easy-to-use
implementation in the MELT framework which is suited for ontology and knowledge
graph matching. We show that a transformer-based filter helps to choose the
correct correspondences given a high-recall alignment and already achieves a
good result with simple alignment post-processing methods.
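As a minimal sketch of the idea of modelling matching as classification, the Python snippet below scores candidate correspondences by classifying pairs of concept descriptions with a transformer and keeps only the confident ones. It does not use MELT's actual (Java-based) API; the model name, threshold, and example pairs are illustrative assumptions, and in practice the classifier head would first be fine-tuned on match/non-match pairs.

```python
# Hedged sketch (not MELT's Java API): treat ontology matching as binary
# classification over pairs of concept descriptions, then use the scores
# to filter a high-recall candidate alignment.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; assumes fine-tuning on match/non-match pairs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def filter_alignment(candidates, threshold=0.5):
    """Keep candidate correspondences whose predicted match probability
    exceeds the threshold; `candidates` is a list of (left_text, right_text)."""
    kept = []
    for left, right in candidates:
        inputs = tokenizer(left, right, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        p_match = torch.softmax(logits, dim=-1)[0, 1].item()
        if p_match >= threshold:
            kept.append((left, right, p_match))
    return kept

# Example: candidates as they might come from a simple high-recall matcher
print(filter_alignment([
    ("conference paper", "a paper accepted at a conference"),
    ("review", "location of the conference venue"),
]))
```

In the paper, this role is played by a transformer-based filter that post-processes a high-recall alignment, combined with simple alignment post-processing methods.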
Related papers
- Evaluating Semantic Variation in Text-to-Image Synthesis: A Causal Perspective [50.261681681643076]
We propose a novel metric called SemVarEffect and a benchmark named SemVarBench to evaluate the causality between semantic variations in inputs and outputs in text-to-image synthesis.
Our work establishes an effective evaluation framework that advances the T2I synthesis community's exploration of human instruction understanding.
arXiv Detail & Related papers (2024-10-14T08:45:35Z)
- Algorithmic Capabilities of Random Transformers [49.73113518329544]
We investigate what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized.
We find that these random transformers can perform a wide range of meaningful algorithmic tasks.
Our results indicate that some algorithmic capabilities are present in transformers even before these models are trained.
arXiv Detail & Related papers (2024-10-06T06:04:23Z)
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms such as low-rank computation achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on the testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z)
- LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network [63.554061288184165]
We propose a novel parameterized text shape method based on low-rank approximation.
By exploring the shape correlation among different text contours, our method achieves consistency, compactness, simplicity, and robustness in shape representation.
We implement an accurate and efficient arbitrary-shaped text detector named LRANet.
arXiv Detail & Related papers (2023-06-27T02:03:46Z)
- Learning Accurate Template Matching with Differentiable Coarse-to-Fine Correspondence Refinement [28.00275083733545]
We propose an accurate template matching method based on differentiable coarse-to-fine correspondence refinement.
An initial warp is estimated using coarse correspondences based on novel structure-aware information provided by transformers.
Our method is significantly better than state-of-the-art methods and baselines, providing good generalization ability and visually plausible results even on unseen real data.
arXiv Detail & Related papers (2023-03-15T08:24:10Z)
- TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval [30.429340065755436]
We devise a new model-agnostic formulation for fine-grained cross-modal alignment.
Inspired by optimal transport theory, we introduce TokenFlow, an instantiation of the proposed scheme.
arXiv Detail & Related papers (2022-09-28T04:11:05Z)
- Mixed-effects transformers for hierarchical adaptation [1.9105318290910576]
We introduce the mixed-effects transformer (MET), a novel approach for learning hierarchically-structured prefixes.
We show how the popular class of mixed-effects models may be extended to transformer-based architectures.
arXiv Detail & Related papers (2022-05-03T19:34:15Z)
- KERMIT - A Transformer-Based Approach for Knowledge Graph Matching [1.9981375888949477]
One of the strongest signals for automated matching of knowledge graphs are the textual descriptions of the concepts.
We show that performing pairwise comparisons of all textual descriptions of concepts in two knowledge graphs is expensive and scales quadratically.
We first generate matching candidates using a pre-trained sentence transformer.
In a second step, we use fine-tuned transformer cross-encoders to generate the best candidates (a sketch of this two-step pipeline is shown after this list).
arXiv Detail & Related papers (2022-04-29T08:07:17Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Transformer-F: A Transformer network with effective methods for learning universal sentence representation [8.225067988604351]
The Transformer model is widely used in natural language processing for sentence representation.
In this paper, two approaches are introduced to improve the performance of Transformers.
arXiv Detail & Related papers (2021-07-02T03:20:11Z)
- Match-Ignition: Plugging PageRank into Transformer for Long-form Text Matching [66.71886789848472]
We propose a novel hierarchical noise filtering model, namely Match-Ignition, to tackle the effectiveness and efficiency problem.
The basic idea is to plug the well-known PageRank algorithm into the Transformer, to identify and filter both sentence and word level noisy information.
Noisy sentences are usually easy to detect because the sentence is the basic unit of a long-form text, so we directly use PageRank to filter such information.
arXiv Detail & Related papers (2021-01-16T10:34:03Z)
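The two-step KERMIT pipeline referenced above (bi-encoder candidate generation followed by cross-encoder scoring) can be sketched as follows. The checkpoints and example descriptions are illustrative assumptions rather than the models used in that paper; the bi-encoder step is what avoids the quadratic comparison of all description pairs.

```python
# Hedged sketch of a bi-encoder + cross-encoder matching pipeline
# (placeholder models, not the checkpoints used by KERMIT).
from sentence_transformers import SentenceTransformer, CrossEncoder, util

source = ["conference paper", "program committee member"]
target = ["a paper accepted at a conference", "reviewer of submissions", "conference venue"]

# Step 1: a bi-encoder proposes a few candidates per source concept,
# avoiding the quadratic comparison of all description pairs.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
src_emb = bi_encoder.encode(source, convert_to_tensor=True)
tgt_emb = bi_encoder.encode(target, convert_to_tensor=True)
hits = util.semantic_search(src_emb, tgt_emb, top_k=2)

# Step 2: a cross-encoder re-scores only the retrieved candidate pairs.
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")
for i, hit_list in enumerate(hits):
    pairs = [(source[i], target[hit["corpus_id"]]) for hit in hit_list]
    scores = cross_encoder.predict(pairs)
    for (s, t), score in zip(pairs, scores):
        print(f"{s!r} <-> {t!r}: {score:.3f}")
```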
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.