OTIEA: Ontology-enhanced Triple Intrinsic-Correlation for Cross-lingual Entity Alignment
- URL: http://arxiv.org/abs/2305.01561v1
- Date: Tue, 2 May 2023 16:03:54 GMT
- Title: OTIEA: Ontology-enhanced Triple Intrinsic-Correlation for Cross-lingual Entity Alignment
- Authors: Zhishuo Zhang and Chengxiang Tan and Xueyan Zhao and Min Yang and
Chaoqun Jiang
- Abstract summary: Cross-lingual and cross-domain knowledge alignment without sufficient external resources is a fundamental and crucial task.
This paper proposes a novel universal EA framework (OTIEA) based on an ontology-pair and role-enhancement mechanism via triple-aware attention.
The experimental results on three real-world datasets show that our framework achieves a competitive performance compared with baselines.
- Score: 12.054806502375575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-lingual and cross-domain knowledge alignment without sufficient
external resources is a fundamental and crucial task for fusing irregular data.
As an element-wise fusion process that aims to discover equivalent objects across different knowledge graphs (KGs), entity alignment (EA) has attracted great interest from industry and academia in recent years. Most existing EA methods explore the correlation between entities and relations through neighbor nodes, structural information and external resources. However, the complex intrinsic interactions among triple elements and role information are rarely modeled in these methods, which may lead to inadequate representations of triples. In addition, external resources are often unavailable, especially in cross-lingual and cross-domain applications, which limits the scalability of these methods. To address these shortcomings, this paper proposes a novel universal EA framework (OTIEA) based on an ontology-pair and role-enhancement mechanism via triple-aware attention, without introducing external resources. Specifically, an ontology-enhanced triple encoder is designed to mine intrinsic correlations and ontology-pair information rather than treating elements independently. In addition, EA-oriented representations are obtained in a triple-aware entity decoder by fusing role diversity. Finally, a bidirectional iterative alignment strategy is deployed to expand the seed entity pairs. The experimental results on three real-world datasets show that our framework achieves competitive performance compared with baselines.
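The abstract names a bidirectional iterative alignment strategy for expanding seed entity pairs but does not spell out the expansion rule. A common realization in EA pipelines, and only an assumption here, is to add cross-KG pairs that are mutual nearest neighbors in embedding space and exceed a similarity threshold; the sketch below illustrates that idea (the function name, threshold, and mutual-nearest-neighbor rule are illustrative, not taken from the paper).

```python
import numpy as np

def expand_seed_pairs(src_emb, tgt_emb, seeds, threshold=0.9):
    """Illustrative bidirectional seed expansion (an assumption, not OTIEA's exact rule).

    src_emb, tgt_emb : (n, d) / (m, d) L2-normalized entity embeddings of the two KGs
    seeds            : set of already-aligned (src_idx, tgt_idx) pairs
    Returns the seed set augmented with mutual nearest neighbors whose
    cosine similarity is at least `threshold`.
    """
    sim = src_emb @ tgt_emb.T        # cosine similarity matrix, shape (n, m)
    fwd = sim.argmax(axis=1)         # best target candidate for each source entity
    bwd = sim.argmax(axis=0)         # best source candidate for each target entity
    expanded = set(seeds)
    used_src = {s for s, _ in expanded}
    used_tgt = {t for _, t in expanded}
    for s, t in enumerate(fwd):
        t = int(t)
        if s in used_src or t in used_tgt:
            continue                 # keep existing alignments fixed
        if bwd[t] == s and sim[s, t] >= threshold:
            expanded.add((s, t))     # both directions agree and similarity is high enough
    return expanded
```

In an iterative setting, the encoder would be retrained on the enlarged seed set and the expansion repeated until no new pairs pass the threshold.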
Related papers
- EnriCo: Enriched Representation and Globally Constrained Inference for Entity and Relation Extraction [3.579132482505273]
Joint entity and relation extraction plays a pivotal role in various applications, notably in the construction of knowledge graphs.
Existing approaches often fall short of two key aspects: richness of representation and coherence in output structure.
In our work, we introduce EnriCo, which mitigates these shortcomings.
arXiv Detail & Related papers (2024-04-18T20:15:48Z) - Source-Free Collaborative Domain Adaptation via Multi-Perspective
Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z) - Feature Decoupling-Recycling Network for Fast Interactive Segmentation [79.22497777645806]
Recent interactive segmentation methods iteratively take source image, user guidance and previously predicted mask as the input.
We propose the Feature Decoupling-Recycling Network (FDRN), which decouples the modeling components based on their intrinsic discrepancies.
arXiv Detail & Related papers (2023-08-07T12:26:34Z) - Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
Multimodal entity linking task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z) - From Alignment to Entailment: A Unified Textual Entailment Framework for
Entity Alignment [17.70562397382911]
Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information.
arXiv Detail & Related papers (2023-05-19T08:06:50Z) - Type-enhanced Ensemble Triple Representation via Triple-aware Attention
for Cross-lingual Entity Alignment [12.894775396801958]
TTEA -- Type-enhanced Ensemble Triple Representation via Triple-aware Attention for Cross-lingual Entity alignment is proposed.
Our framework uses triple-aware entity enhancement to model the role diversity of triple elements.
Our framework outperforms state-of-the-art methods in experiments on three real-world cross-lingual datasets.
arXiv Detail & Related papers (2023-05-02T15:56:11Z) - UniRel: Unified Representation and Interaction for Joint Relational
Triple Extraction [29.15806644012706]
We propose UniRel to address the challenges of capturing rich correlations between entities and relations.
Specifically, we unify representations of entities and relations by jointly encoding them within a concatenated natural language sequence.
With comprehensive experiments on two popular triple extraction datasets, we demonstrate that UniRel is more effective and computationally efficient.
arXiv Detail & Related papers (2022-11-16T16:53:13Z) - Towards Realistic Low-resource Relation Extraction: A Benchmark with
Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation technologies and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z) - PRGC: Potential Relation and Global Correspondence Based Joint
Relational Triple Extraction [23.998135821388203]
We propose a joint relational triple extraction framework based on Potential Relation and Global Correspondence (PRGC).
PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.
arXiv Detail & Related papers (2021-06-18T03:38:07Z) - Link Prediction on N-ary Relational Data Based on Relatedness Evaluation [61.61555159755858]
We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs (see the sketch after this list).
Experimental results validate the effectiveness and merits of the proposed methods.
arXiv Detail & Related papers (2021-04-21T09:06:54Z) - Cross-Supervised Joint-Event-Extraction with Heterogeneous Information
Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
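As a concrete illustration of the role and role-value representation mentioned in the NaLP entry above, an n-ary fact can be written as a set of (role, value) pairs; the fact and role names below are made-up examples, not data from that paper.

```python
# Hypothetical n-ary fact: "Marie Curie received the Nobel Prize in Physics in 1903",
# expressed as role / role-value pairs as described in the NaLP entry above.
fact = {
    "award_recipient": "Marie Curie",
    "award": "Nobel Prize in Physics",
    "point_in_time": "1903",
}
role_value_pairs = set(fact.items())   # {("award_recipient", "Marie Curie"), ...}
```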
This list is automatically generated from the titles and abstracts of the papers on this site.