Cross-lingual Entity Alignment with Adversarial Kernel Embedding and
Adversarial Knowledge Translation
- URL: http://arxiv.org/abs/2104.07837v1
- Date: Fri, 16 Apr 2021 00:57:28 GMT
- Title: Cross-lingual Entity Alignment with Adversarial Kernel Embedding and
Adversarial Knowledge Translation
- Authors: Gong Zhang, Yang Zhou, Sixing Wu, Zeru Zhang, Dejing Dou
- Abstract summary: Cross-lingual entity alignment often suffers from challenges ranging from feature inconsistency to sequence context unawareness.
This paper presents a dual adversarial learning framework for cross-lingual entity alignment, DAEA, with two original contributions.
- Score: 35.77482102674059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-lingual entity alignment, which aims to precisely connect the same
entities in different monolingual knowledge bases (KBs), often suffers from
challenges ranging from feature inconsistency to sequence context unawareness. This
paper presents a dual adversarial learning framework for cross-lingual entity
alignment, DAEA, with two original contributions. First, in order to address
the structural and attribute feature inconsistency between entities in two
knowledge graphs (KGs), an adversarial kernel embedding technique is proposed
to extract graph-invariant information in an unsupervised manner, and project
two KGs into a common embedding space. Second, in order to further improve the
success rate of entity alignment, we propose to generate multiple random walks
through each entity to be aligned and to mask these entities within the walks.
With the guidance of known aligned entities in the context of multiple
random walks, an adversarial knowledge translation model is developed to fill
and translate masked entities in pairwise random walks from two KGs. Extensive
experiments on real-world datasets show that DAEA effectively addresses the
feature inconsistency and sequence context unawareness issues and significantly
outperforms thirteen state-of-the-art entity alignment methods.
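To make the first contribution more concrete, here is a minimal domain-adversarial sketch in PyTorch of projecting two KGs' entity features into one shared space: a discriminator tries to tell which KG an embedding came from, while the projectors are trained to make the two KGs indistinguishable. This illustrates only the general adversarial setup, not DAEA's actual kernel embedding; `Projector`, `KGDiscriminator`, and `adversarial_step` are hypothetical names introduced here.

```python
import torch
import torch.nn as nn

class Projector(nn.Module):
    """Maps KG-specific entity features into a shared embedding space."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x):
        return self.net(x)

class KGDiscriminator(nn.Module):
    """Predicts which KG an embedding came from (0 = KG1, 1 = KG2)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z):
        return self.net(z).squeeze(-1)

def adversarial_step(proj1, proj2, disc, x1, x2, opt_proj, opt_disc):
    """One GAN-style update: the discriminator learns to separate the two KGs,
    then the projectors learn to make them indistinguishable (graph-invariant)."""
    bce = nn.BCEWithLogitsLoss()
    # 1) discriminator step (projector outputs detached)
    z1, z2 = proj1(x1).detach(), proj2(x2).detach()
    d_loss = bce(disc(z1), torch.zeros(len(z1))) + bce(disc(z2), torch.ones(len(z2)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # 2) projector step: fool the discriminator so both KGs share one space
    z1, z2 = proj1(x1), proj2(x2)
    g_loss = bce(disc(z1), torch.ones(len(z1))) + bce(disc(z2), torch.zeros(len(z2)))
    opt_proj.zero_grad(); g_loss.backward(); opt_proj.step()
    return d_loss.item(), g_loss.item()
```

Here `x1` and `x2` would be batches of structural/attribute features from the two KGs, and `opt_proj` optimizes the parameters of both projectors; known aligned seed pairs, when available, can add a supervised alignment loss on top of the adversarial one.

The second contribution, masking entities in random walks while keeping known aligned entities as context, can be sketched with the standard library alone. The adjacency-dict KG representation and the helpers `random_walks_through` and `mask_unaligned` are likewise assumptions for illustration, not the paper's implementation.

```python
import random

def random_walks_through(adj, entity, num_walks=5, walk_len=7, seed=0):
    """Sample `num_walks` walks that all pass through `entity`.

    `adj` maps an entity id to a list of its neighbours; each walk is grown
    forward and backward from `entity`, so the target always appears in it.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        walk = [entity]
        while len(walk) < walk_len and adj.get(walk[-1]):   # extend forward
            walk.append(rng.choice(adj[walk[-1]]))
        while len(walk) < walk_len and adj.get(walk[0]):    # extend backward
            walk.insert(0, rng.choice(adj[walk[0]]))
        walks.append(walk)
    return walks

def mask_unaligned(walk, known_aligned, mask_token="[MASK]"):
    """Mask every entity that is not in the known aligned (seed) set.

    The surviving aligned entities give a translation model the cross-lingual
    context it needs to fill in and translate the masked positions.
    """
    return [e if e in known_aligned else mask_token for e in walk]

# Toy usage on a tiny English-side KG fragment.
kg_en = {
    "Paris": ["France", "Seine"],
    "France": ["Paris", "Europe"],
    "Seine": ["Paris"],
    "Europe": ["France"],
}
seeds = {"France", "Europe"}  # entities already aligned across the two KGs
for w in random_walks_through(kg_en, "Paris", num_walks=2):
    print(mask_unaligned(w, seeds))
```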
Related papers
- Unsupervised Robust Cross-Lingual Entity Alignment via Neighbor Triple Matching with Entity and Relation Texts [17.477542644785483]
Cross-lingual entity alignment (EA) enables the integration of multiple knowledge graphs (KGs) across different languages.
The proposed EA pipeline jointly performs entity-level and relation-level alignment via a neighbor triple matching strategy.
arXiv Detail & Related papers (2024-07-22T12:25:48Z)
- EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration [63.112790050749695]
We introduce EAGER, a novel generative recommendation framework that seamlessly integrates both behavioral and semantic information.
We validate the effectiveness of EAGER on four public benchmarks, demonstrating its superior performance compared to existing methods.
arXiv Detail & Related papers (2024-06-20T06:21:56Z)
- mCL-NER: Cross-Lingual Named Entity Recognition via Multi-view Contrastive Learning [54.523172171533645]
Cross-lingual named entity recognition (CrossNER) faces challenges stemming from uneven performance due to the scarcity of multilingual corpora.
We propose Multi-view Contrastive Learning for Cross-lingual Named Entity Recognition (mCL-NER)
Our experiments on the XTREME benchmark, spanning 40 languages, demonstrate the superiority of mCL-NER over prior data-driven and model-based approaches.
arXiv Detail & Related papers (2023-08-17T16:02:29Z)
- From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment [17.70562397382911]
Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information.
arXiv Detail & Related papers (2023-05-19T08:06:50Z)
- Object Segmentation by Mining Cross-Modal Semantics [68.88086621181628]
We propose a novel approach by mining the Cross-Modal Semantics to guide the fusion and decoding of multimodal features.
Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision.
arXiv Detail & Related papers (2023-05-17T14:30:11Z)
- Type-enhanced Ensemble Triple Representation via Triple-aware Attention for Cross-lingual Entity Alignment [12.894775396801958]
TTEA -- Type-enhanced Ensemble Triple Representation via Triple-aware Attention for Cross-lingual Entity alignment is proposed.
Our framework uses triple-aware entity enhancement to model the role diversity of triple elements.
Our framework outperforms state-of-the-art methods in experiments on three real-world cross-lingual datasets.
arXiv Detail & Related papers (2023-05-02T15:56:11Z)
- IXA/Cogcomp at SemEval-2023 Task 2: Context-enriched Multilingual Named Entity Recognition using Knowledge Bases [53.054598423181844]
We present a novel NER cascade approach comprising three steps.
We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities.
Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting.
arXiv Detail & Related papers (2023-04-20T20:30:34Z)
- Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.