Multifaceted Context Representation using Dual Attention for Ontology Alignment
- URL: http://arxiv.org/abs/2010.11721v2
- Date: Mon, 26 Oct 2020 11:31:42 GMT
- Title: Multifaceted Context Representation using Dual Attention for Ontology Alignment
- Authors: Vivek Iyer, Arvind Agarwal, Harshit Kumar
- Abstract summary: Ontology alignment is an important research problem with applications in fields such as data integration, data transfer, and data preparation.
We propose VeeAlign, a deep-learning-based model that uses a dual-attention mechanism to compute contextualized representations of concepts in order to learn alignments.
We validate our approach on various datasets from different domains and in multilingual settings, and show its superior performance over SOTA methods.
- Score: 6.445605125467574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ontology alignment is an important research problem with applications in various fields such as data integration, data transfer, and data preparation. State-of-the-art (SOTA) architectures in ontology alignment typically use naive domain-dependent approaches with handcrafted rules and manually assigned values, making them unscalable and inefficient. Deep-learning approaches to ontology alignment use domain-specific architectures that not only fail to extend to other datasets and domains, but also typically perform worse than rule-based approaches due to limitations such as model overfitting and dataset sparsity. In this work, we propose VeeAlign, a deep-learning-based model that uses a dual-attention mechanism to compute the contextualized representation of a concept in order to learn alignments. By doing so, our approach not only exploits both the syntactic and semantic structure of ontologies, but is also, by design, flexible and scalable to different domains with minimal effort. We validate our approach on various datasets from different domains and in multilingual settings, and show its superior performance over SOTA methods.
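To make the dual-attention idea concrete, here is a minimal sketch of a two-level attention encoder: one attention weighs the elements within each context facet (e.g., ancestor paths, children, properties), and a second attention weighs the pooled facets themselves. All names, shapes, and facet choices are illustrative assumptions, not VeeAlign's actual implementation.

```python
# Minimal two-level ("dual") attention encoder for a concept and its context.
# Module names, shapes, and facet choices are illustrative assumptions only.
import torch
import torch.nn as nn

class DualAttentionEncoder(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.node_attn = nn.Linear(dim, 1)   # level 1: elements within a facet
        self.facet_attn = nn.Linear(dim, 1)  # level 2: across pooled facets

    @staticmethod
    def attend(scores: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(scores.squeeze(-1), dim=-1)
        return (weights.unsqueeze(-1) * values).sum(dim=-2)

    def forward(self, concept: torch.Tensor, facets: list) -> torch.Tensor:
        # concept: (dim,); facets: list of (num_elements, dim) tensors,
        # one per context type (e.g., ancestors, children, properties).
        pooled = [self.attend(self.node_attn(f), f) for f in facets]
        stacked = torch.stack(pooled)                    # (num_facets, dim)
        context = self.attend(self.facet_attn(stacked), stacked)
        return torch.cat([concept, context], dim=-1)     # contextualized vector
```

An alignment score between two concepts from different ontologies could then be, say, the cosine similarity of their contextualized vectors, thresholded to decide a match.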
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- A Few-Shot Approach for Relation Extraction Domain Adaptation using Large Language Models [1.3927943269211591]
This paper experiments with leveraging the in-context learning capabilities of Large Language Models to perform data annotation.
We show that, by using a few-shot learning strategy with structured prompts and only minimal expert annotation, the presented approach can potentially support domain adaptation of a science KG generation model.
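As a rough illustration of such a structured few-shot prompt (the template, relation inventory, and examples are invented here, not taken from the paper):

```python
# Hypothetical structured few-shot prompt for LLM-based relation annotation.
# Template, relation inventory, and examples are all invented for illustration.
FEW_SHOT_EXAMPLES = [
    ("BERT was developed by Google.", "developed_by"),
    ("Graphene is a form of carbon.", "subclass_of"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a structured few-shot annotation prompt."""
    lines = [
        "Label the relation expressed in the sentence.",
        "Allowed relations: developed_by, subclass_of, used_for.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Sentence: {text}", f"Relation: {label}", ""]
    lines += [f"Sentence: {sentence}", "Relation:"]
    return "\n".join(lines)

print(build_prompt("AlexNet is used for image classification."))
```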
arXiv Detail & Related papers (2024-08-05T11:06:36Z)
- Improving Intrusion Detection with Domain-Invariant Representation Learning in Latent Space [4.871119861180455]
We introduce a two-phase representation learning technique using multi-task learning.
We disentangle the latent space by minimizing the mutual information between the prior and latent space.
We assess the model's efficacy across multiple cybersecurity datasets.
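The abstract does not name the estimator; one common way to minimize mutual information between two spaces is to train a critic on a Donsker-Varadhan (MINE-style) bound and have the encoder minimize the resulting estimate. A sketch under that assumption:

```python
# MINE-style estimate of the mutual information between two representation
# spaces. The encoder minimizes this estimate while the critic maximizes it;
# all names here are illustrative, not the paper's actual estimator.
import math
import torch
import torch.nn as nn

class MICritic(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

def mi_estimate(critic: MICritic, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)]."""
    joint = critic(a, b).mean()
    shuffled = b[torch.randperm(b.size(0))]   # break pairing -> marginals
    marginal = torch.logsumexp(critic(a, shuffled), dim=0) - math.log(b.size(0))
    return joint - marginal
```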
arXiv Detail & Related papers (2023-12-28T17:24:13Z)
- Multi-Domain Learning From Insufficient Annotations [26.83058974786833]
Multi-domain learning refers to simultaneously constructing a model or a set of models on datasets collected from different domains.
In this paper, we introduce a novel method called multi-domain contrastive learning (MDCL) to alleviate the impact of insufficient annotations.
Experimental results across five datasets demonstrate that MDCL brings noticeable improvement over various SP models.
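The abstract gives no objective; a generic supervised contrastive loss that pulls same-class samples together across domains (a sketch under our own assumptions, not necessarily MDCL's exact loss) could look like:

```python
# Generic supervised contrastive loss: same-class samples (possibly from
# different domains) are pulled together, all others pushed apart.
import torch
import torch.nn.functional as F

def supervised_contrastive(z: torch.Tensor, labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    z = F.normalize(z, dim=-1)                        # (batch, dim)
    sim = z @ z.t() / temperature                     # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))         # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)         # avoid -inf * 0 = nan
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # Average log-probability over each anchor's positives.
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```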
arXiv Detail & Related papers (2023-05-04T11:50:19Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
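A minimal sketch of such a shared implicit-surface auxiliary task (the distance-regression formulation and all names are our assumptions): one decoder reconstructs point-to-surface distance from the same latent space for both source and target scans, so the encoder cannot specialize to either domain alone.

```python
# Shared implicit-surface auxiliary task: one decoder regresses the distance
# of 3D query points to the surface from a latent shared by both domains.
# All names and the distance-regression formulation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurfaceDecoder(nn.Module):
    def __init__(self, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, latent: torch.Tensor, query_xyz: torch.Tensor) -> torch.Tensor:
        # latent: (batch, latent_dim); query_xyz: (batch, 3).
        return self.net(torch.cat([latent, query_xyz], dim=-1)).squeeze(-1)

def surface_loss(decoder: SurfaceDecoder, latent, query_xyz, true_dist):
    """Apply the same reconstruction objective to source AND target batches."""
    return F.mse_loss(decoder(latent, query_xyz), true_dist)
```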
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown overwhelming performance compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
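As one concrete instance of relational distillation (a simplified stand-in, not HRKD's exact hierarchical objective), the student can be trained to match the teacher's pairwise similarity structure:

```python
# Relational distillation: the student matches the teacher's pairwise
# similarity structure rather than raw logits. A simplified stand-in for HRKD.
import torch
import torch.nn.functional as F

def relational_kd_loss(student_emb: torch.Tensor,
                       teacher_emb: torch.Tensor) -> torch.Tensor:
    def pairwise_cosine(x: torch.Tensor) -> torch.Tensor:
        x = F.normalize(x, dim=-1)
        return x @ x.t()                     # (batch, batch) relation matrix
    return F.mse_loss(pairwise_cosine(student_emb),
                      pairwise_cosine(teacher_emb))
```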
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
- Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer [27.64947077788111]
Unsupervised domain adaptation for semantic segmentation aims to make models trained on synthetic data adapt to real images.
Previous feature-level adversarial learning methods only consider adapting models on the high-level semantic features.
We present the first attempt at explicitly using low-level edge information, which has a small inter-domain gap, to guide the transfer of semantic information.
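As a sketch of why edge cues transfer cheaply (the Sobel operator and L1 loss are our assumptions): edge targets can be computed from the image itself in both domains, so an auxiliary edge head needs no annotations.

```python
# Edge targets computed from the image itself with fixed Sobel filters;
# usable in both domains without labels. Operator and loss are assumptions.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.t()

def edge_map(gray: torch.Tensor) -> torch.Tensor:
    # gray: (batch, 1, H, W) single-channel image tensor.
    kx = SOBEL_X.view(1, 1, 3, 3).to(gray.device)
    ky = SOBEL_Y.view(1, 1, 3, 3).to(gray.device)
    gx, gy = F.conv2d(gray, kx, padding=1), F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)        # gradient magnitude

def edge_loss(pred_edges: torch.Tensor, image_gray: torch.Tensor) -> torch.Tensor:
    """L1 between a model's predicted edges and Sobel edges of its input."""
    return F.l1_loss(pred_edges, edge_map(image_gray))
```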
arXiv Detail & Related papers (2021-09-18T11:51:31Z)
- Adapting Segmentation Networks to New Domains by Disentangling Latent Representations [14.050836886292869]
Domain adaptation approaches have come into play to transfer knowledge acquired on a label-abundant source domain to a related label-scarce target domain.
We propose a novel performance metric to capture the relative efficacy of an adaptation strategy compared to supervised training.
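The abstract does not define the metric; one plausible formalization (purely our illustration, not the paper's exact definition) normalizes the adaptation gain over a source-only baseline by the gain of fully supervised target training:

```python
def relative_efficacy(source_only: float, adapted: float,
                      target_supervised: float) -> float:
    """Fraction of the fully-supervised gain recovered by adaptation.

    1.0: adaptation matches supervised target training; 0.0: no gain over
    the source-only baseline. These semantics are our illustration only.
    """
    gain = target_supervised - source_only
    return (adapted - source_only) / gain if gain else 0.0

# e.g. mIoU: source-only 35.0, adapted 48.0, supervised oracle 65.0
print(relative_efficacy(35.0, 48.0, 65.0))  # -> ~0.43
```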
arXiv Detail & Related papers (2021-08-06T09:43:07Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains into a common latent space.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Towards Domain-Agnostic Contrastive Learning [103.40783553846751]
We propose a novel domain-agnostic approach to contrastive learning, named DACL.
Key to our approach is the use of Mixup noise to create similar and dissimilar examples by mixing data samples differently either at the input or hidden-state levels.
Our results show that DACL not only outperforms other domain-agnostic noising methods, such as Gaussian-noise, but also combines well with domain-specific methods, such as SimCLR.
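A sketch of the Mixup-noise idea described above (the mixing range and loss plumbing are our assumptions): each sample's positive view is a mixture dominated by the sample itself, and an NT-Xent-style loss contrasts it against the other mixtures in the batch.

```python
# DACL-style Mixup positives: mix each sample with a random partner at a
# coefficient close to 1, then contrast sample vs. its mixture (NT-Xent).
import torch
import torch.nn.functional as F

def mixup_views(x: torch.Tensor, low: float = 0.7, high: float = 0.9) -> torch.Tensor:
    lam = torch.empty(x.size(0), device=x.device).uniform_(low, high)
    lam = lam.view(-1, *([1] * (x.dim() - 1)))     # broadcast over features
    partner = x[torch.randperm(x.size(0))]
    return lam * x + (1 - lam) * partner           # mixtures dominated by x

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature             # (batch, batch)
    targets = torch.arange(len(z1), device=z1.device)
    return F.cross_entropy(logits, targets)        # diagonal pairs = positives
```

A training step would then be `loss = nt_xent(encoder(x), encoder(mixup_views(x)))`, with the mixing range treated as a hyperparameter.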
arXiv Detail & Related papers (2020-11-09T13:41:56Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
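The propagation step can be sketched as one round of graph convolution over class prototypes pooled from all domains (the similarity-based adjacency below is a placeholder; LtC-MSDA's construction may differ):

```python
# One round of message passing over (domain, class) prototypes:
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W). The similarity-based adjacency
# below is a placeholder; LtC-MSDA's construction may differ.
import torch
import torch.nn as nn

def propagate(prototypes: torch.Tensor, weight: nn.Linear) -> torch.Tensor:
    # prototypes: (num_prototypes, dim), one row per (domain, class) pair.
    sim = torch.relu(torch.cosine_similarity(
        prototypes.unsqueeze(0), prototypes.unsqueeze(1), dim=-1))
    adj = sim + torch.eye(len(prototypes))         # add self-loops
    deg = adj.sum(dim=1).rsqrt().diag()            # D^{-1/2}
    return torch.relu(deg @ adj @ deg @ weight(prototypes))
```

The propagated prototypes then serve as class anchors shared by all source domains.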
arXiv Detail & Related papers (2020-07-17T07:52:44Z)