A Comprehensive Survey on Knowledge Graph Entity Alignment via
Representation Learning
- URL: http://arxiv.org/abs/2103.15059v1
- Date: Sun, 28 Mar 2021 06:23:48 GMT
- Authors: Rui Zhang, Bayu Distiawan Trisedya, Miao Li, Yong Jiang, Jianzhong Qi
- Abstract summary: This paper provides a tutorial-type survey on representative entity alignment techniques.
We propose two datasets to address the limitations of existing benchmark datasets.
We conduct extensive experiments using the proposed datasets.
- Score: 39.401580902256626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last few years, the interest in knowledge bases has grown
exponentially in both the research community and the industry due to their
essential role in AI applications. Entity alignment is an important task for
enriching knowledge bases. This paper provides a comprehensive tutorial-type
survey on representative entity alignment techniques that use the new approach
of representation learning. We present a framework for capturing the key
characteristics of these techniques, propose two datasets to address the
limitations of existing benchmark datasets, and conduct extensive experiments
using the proposed datasets. The framework gives a clear picture of how the
techniques work. The experiments yield important results about the empirical
performance of the techniques and how various factors affect the performance.
One important observation not stressed by previous work is that techniques
making good use of attribute triples and relation predicates as features stand
out as winners.
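To make the idea of representation-learning-based entity alignment concrete, the following is a minimal, illustrative sketch, not any specific surveyed technique: it assumes each KG's entities have already been embedded into a shared vector space (e.g., via a translation-based model trained on seed alignments), and aligns entities greedily by cosine similarity. All entity names and vectors below are made up for illustration and do not come from the paper's benchmarks.

```python
# Toy sketch of embedding-based entity alignment (illustrative only).
# Assumes embeddings for both KGs already live in one shared space.
import numpy as np

def align_entities(emb_kg1, emb_kg2):
    """Greedy nearest-neighbor alignment by cosine similarity."""
    names1, vecs1 = zip(*emb_kg1.items())
    names2, vecs2 = zip(*emb_kg2.items())
    a = np.array(vecs1, dtype=float)
    b = np.array(vecs2, dtype=float)
    # Normalize rows so the dot product equals cosine similarity.
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T  # similarity matrix: rows = KG1 entities, cols = KG2
    return {n1: names2[j] for n1, j in zip(names1, sim.argmax(axis=1))}

# Hypothetical 2-d embeddings for two small KGs.
kg1 = {"Melbourne": [0.9, 0.1], "Australia": [0.1, 0.9]}
kg2 = {"City_of_Melbourne": [0.85, 0.15],
       "Commonwealth_of_Australia": [0.2, 0.8]}
print(align_entities(kg1, kg2))
# {'Melbourne': 'City_of_Melbourne', 'Australia': 'Commonwealth_of_Australia'}
```

In real systems, the interesting work happens upstream of this step, in how the embeddings are learned; the survey's observation suggests that models incorporating attribute triples and relation predicates into those embeddings tend to produce spaces where this kind of nearest-neighbor matching works best.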
Related papers
- Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents [16.78371134590167]
Key-value relations are prevalent in Visually-Rich Documents (VRDs).
These non-textual cues serve as important indicators that greatly enhance human comprehension and acquisition of such relation triplets.
Our research focuses on few-shot relational learning, specifically targeting the extraction of key-value relation triplets in VRDs.
arXiv Detail & Related papers (2024-03-23T08:40:35Z)
- Capture the Flag: Uncovering Data Insights with Large Language Models [90.47038584812925]
This study explores the potential of using Large Language Models (LLMs) to automate the discovery of insights in data.
We propose a new evaluation methodology based on a "capture the flag" principle, measuring the ability of such models to recognize meaningful and pertinent information (flags) in a dataset.
arXiv Detail & Related papers (2023-12-21T14:20:06Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- Self-Supervised Representation Learning: Introduction, Advances and Challenges [125.38214493654534]
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets.
This article introduces this vibrant area including key concepts, the four main families of approach and associated state of the art, and how self-supervised methods are applied to diverse modalities of data.
arXiv Detail & Related papers (2021-10-18T13:51:22Z)
- Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key issues: (a) can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representations for downstream tasks, and (b) can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
- Reasoning over Vision and Language: Exploring the Benefits of Supplemental Knowledge [59.87823082513752]
This paper investigates the injection of knowledge from general-purpose knowledge bases (KBs) into vision-and-language transformers.
We empirically study the relevance of various KBs to multiple tasks and benchmarks.
The technique is model-agnostic and can expand the applicability of any vision-and-language transformer with minimal computational overhead.
arXiv Detail & Related papers (2021-01-15T08:37:55Z)
- Towards a Flexible Embedding Learning Framework [15.604564543883122]
We propose an embedding learning framework that is flexible in terms of the relationships that can be embedded into the learned representations.
A sampling mechanism is carefully designed to establish a direct connection between the input and the information captured by the output embeddings.
Our empirical results demonstrate that the proposed framework, in conjunction with a set of relevant entity-relation-matrices, outperforms the existing state-of-the-art approaches in various data mining tasks.
arXiv Detail & Related papers (2020-09-23T08:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.