Cross-lingual Entity Alignment with Incidental Supervision
- URL: http://arxiv.org/abs/2005.00171v2
- Date: Tue, 26 Jan 2021 05:15:45 GMT
- Title: Cross-lingual Entity Alignment with Incidental Supervision
- Authors: Muhao Chen, Weijia Shi, Ben Zhou, Dan Roth
- Abstract summary: We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
- Score: 76.66793175159192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much research effort has been put to multilingual knowledge graph (KG)
embedding methods to address the entity alignment task, which seeks to match
entities in different language-specific KGs that refer to the same real-world
object. Such methods are often hindered by the insufficiency of seed alignment
provided between KGs. Therefore, we propose an incidentally supervised model,
JEANS, which jointly represents multilingual KGs and text corpora in a shared
embedding scheme, and seeks to improve entity alignment with incidental
supervision signals from text. JEANS first deploys an entity grounding process
to combine each KG with the monolingual text corpus. Then, two learning
processes are conducted: (i) an embedding learning process to encode the KG and
text of each language in one embedding space, and (ii) a self-learning-based
alignment learning process to iteratively induce the matching of entities and
that of lexemes between embeddings. Experiments on benchmark datasets show that
JEANS leads to promising improvement on entity alignment with incidental
supervision, and significantly outperforms state-of-the-art methods that solely
rely on internal information of KGs.
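The self-learning alignment process described in (ii) can be sketched as a mutual-nearest-neighbor bootstrapping loop over a shared embedding space: starting from the seed alignments, confidently matched pairs are added each round and the search repeats. The similarity threshold, round count, and mutual-nearest-neighbor criterion below are illustrative assumptions, not JEANS's exact procedure.

```python
import numpy as np

def self_learning_alignment(src_emb, tgt_emb, seeds, n_rounds=5, threshold=0.9):
    """Iteratively grow an alignment set by mutual-nearest-neighbor matching.

    src_emb, tgt_emb: (n, d) and (m, d) L2-normalized embedding matrices,
    assumed to live in one shared space (as JEANS's joint encoding produces).
    seeds: list of (src_idx, tgt_idx) known alignments.
    """
    aligned = set(seeds)
    matched_src = {s for s, _ in aligned}
    matched_tgt = {t for _, t in aligned}
    for _ in range(n_rounds):
        sim = src_emb @ tgt_emb.T            # cosine similarity (rows normalized)
        fwd = sim.argmax(axis=1)             # best target for each source
        bwd = sim.argmax(axis=0)             # best source for each target
        new = []
        for s, t in enumerate(fwd):
            # accept only confident, mutually-nearest, not-yet-matched pairs
            if bwd[t] == s and sim[s, t] >= threshold \
                    and s not in matched_src and t not in matched_tgt:
                new.append((s, int(t)))
        if not new:
            break                            # converged: no new matches
        for s, t in new:
            aligned.add((s, t))
            matched_src.add(s)
            matched_tgt.add(t)
    return sorted(aligned)
```

In JEANS the same loop would also induce lexeme matches between the text vocabularies, since words and entities share the embedding space.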
Related papers
- DIAL: Dense Image-text ALignment for Weakly Supervised Semantic Segmentation [8.422110274212503]
Weakly supervised semantic segmentation approaches typically rely on class activation maps (CAMs) for initial seed generation.
We introduce DALNet, which leverages text embeddings to enhance the comprehensive understanding and precise localization of objects across different levels of granularity.
In particular, our approach enables a more efficient end-to-end process as a single-stage method.
arXiv Detail & Related papers (2024-09-24T06:51:49Z)
- Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment [31.70064035432789]
We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA)
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
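LLMEA's candidate selection, as summarized above, combines two signals: embedding similarity across KGs and edit distance to an entity name. A minimal sketch of that idea follows; the linear mixing weight `alpha` and the normalization of edit distance are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance, one rolling row
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def rank_candidates(query_emb, cand_embs, query_name, cand_names, alpha=0.5):
    """Rank candidate entities by mixing embedding and name similarity.

    query_emb: (d,) and cand_embs: (k, d), both L2-normalized.
    alpha weighs cosine similarity against (1 - normalized edit distance).
    """
    cos = cand_embs @ query_emb
    name_sim = np.array([
        1 - edit_distance(query_name, n) / max(len(query_name), len(n), 1)
        for n in cand_names
    ])
    scores = alpha * cos + (1 - alpha) * name_sim
    return np.argsort(-scores)               # candidate indices, best first
```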
arXiv Detail & Related papers (2024-01-30T12:41:04Z)
- Unifying Structure and Language Semantic for Efficient Contrastive Knowledge Graph Completion with Structured Entity Anchors [0.3913403111891026]
The goal of knowledge graph completion (KGC) is to predict missing links in a KG using trained facts that are already known.
We propose a novel method to effectively unify structure information and language semantics without losing the power of inductive reasoning.
arXiv Detail & Related papers (2023-11-07T11:17:55Z)
- mCL-NER: Cross-Lingual Named Entity Recognition via Multi-view Contrastive Learning [54.523172171533645]
Cross-lingual named entity recognition (CrossNER) faces challenges stemming from uneven performance due to the scarcity of multilingual corpora.
We propose Multi-view Contrastive Learning for Cross-lingual Named Entity Recognition (mCL-NER)
Our experiments on the XTREME benchmark, spanning 40 languages, demonstrate the superiority of mCL-NER over prior data-driven and model-based approaches.
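The contrastive objective underlying approaches like mCL-NER can be illustrated with a generic InfoNCE-style loss, where each anchor is pulled toward its paired view and pushed away from the other examples in the batch. This is a standard contrastive loss for illustration, not mCL-NER's exact multi-view formulation.

```python
import numpy as np

def info_nce(anchor, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over one batch.

    anchor, positives: (n, d) L2-normalized embeddings; row i of `positives`
    is the positive view for row i of `anchor`, other rows act as negatives.
    """
    logits = anchor @ positives.T / temperature   # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # diagonal = positive pairs
```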
arXiv Detail & Related papers (2023-08-17T16:02:29Z)
- Hybrid Rule-Neural Coreference Resolution System based on Actor-Critic Learning [53.73316523766183]
Coreference resolution systems need to tackle two main tasks.
One task is to detect all of the potential mentions, the other is to learn the linking of an antecedent for each possible mention.
We propose a hybrid rule-neural coreference resolution system based on actor-critic learning.
arXiv Detail & Related papers (2022-12-20T08:55:47Z)
- Neural Coreference Resolution based on Reinforcement Learning [53.73316523766183]
Coreference resolution systems need to solve two subtasks.
One task is to detect all of the potential mentions, the other is to learn the linking of an antecedent for each possible mention.
We propose a reinforcement learning actor-critic-based neural coreference resolution system.
arXiv Detail & Related papers (2022-12-18T07:36:35Z)
- UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models [170.88745906220174]
We propose the UnifiedSKG framework, which unifies 21 SKG tasks into a text-to-text format.
We show that UnifiedSKG achieves state-of-the-art performance on almost all of the 21 tasks.
We also use UnifiedSKG to conduct a series of experiments on structured knowledge encoding variants across SKG tasks.
arXiv Detail & Related papers (2022-01-16T04:36:18Z)
- Multilingual Knowledge Graph Completion with Joint Relation and Entity Alignment [32.47122460214232]
We present ALIGNKGC, which uses some seed alignments to jointly optimize all three losses: KGC, relation alignment (RA), and entity alignment (EA).
ALIGNKGC achieves 10-32 MRR improvements over a strong state-of-the-art single-KGC completion model on each monolingual KG.
arXiv Detail & Related papers (2021-04-18T10:27:44Z)
- Cross-lingual Entity Alignment with Adversarial Kernel Embedding and Adversarial Knowledge Translation [35.77482102674059]
Cross-lingual entity alignment often suffers challenges from feature inconsistency to sequence context unawareness.
This paper presents a dual adversarial learning framework for cross-lingual entity alignment, DAEA, with two original contributions.
arXiv Detail & Related papers (2021-04-16T00:57:28Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA in pre-training phase to obtain a deeper understanding of the entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.