Visual Pivoting for (Unsupervised) Entity Alignment
- URL: http://arxiv.org/abs/2009.13603v2
- Date: Thu, 17 Dec 2020 02:18:41 GMT
- Title: Visual Pivoting for (Unsupervised) Entity Alignment
- Authors: Fangyu Liu, Muhao Chen, Dan Roth, Nigel Collier
- Abstract summary: This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs).
We show that the proposed new approach, EVA, creates a holistic entity representation that provides strong signals for cross-graph entity alignment.
- Score: 93.82387952905756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work studies the use of visual semantic representations to align
entities in heterogeneous knowledge graphs (KGs). Images are natural components
of many existing KGs. By combining visual knowledge with other auxiliary
information, we show that the proposed new approach, EVA, creates a holistic
entity representation that provides strong signals for cross-graph entity
alignment. Moreover, previous entity alignment methods require human-labelled
seed alignments, which limits their applicability. EVA provides a completely
unsupervised solution by leveraging the visual similarity of entities to create
an initial seed dictionary (visual pivots). Experiments on benchmark data sets
DBP15k and DWY15k show that EVA offers state-of-the-art performance on both
monolingual and cross-lingual entity alignment tasks. Furthermore, we discover
that images are particularly useful to align long-tail KG entities, which
inherently lack the structural contexts necessary for capturing the
correspondences.
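To make the visual-pivot idea concrete, the sketch below shows one plausible way to build an unsupervised seed dictionary from image features alone: embed each entity's image with a pretrained vision encoder, then keep high-confidence mutual nearest neighbours across the two KGs as pseudo seeds. This is a minimal illustration based only on the abstract, not EVA's exact procedure; the function name, the mutual-nearest-neighbour rule, and the `min_sim` threshold are assumptions.

```python
import numpy as np

def visual_pivot_seeds(img_emb_g1: np.ndarray,
                       img_emb_g2: np.ndarray,
                       min_sim: float = 0.85):
    """Build an initial seed dictionary from image-feature similarity.

    img_emb_g1: (n1, d) image embeddings for the entities of KG1
    img_emb_g2: (n2, d) image embeddings for the entities of KG2
    Returns a list of (i, j) index pairs used as pseudo seed alignments.
    """
    # L2-normalise so the dot product equals cosine similarity.
    a = img_emb_g1 / np.linalg.norm(img_emb_g1, axis=1, keepdims=True)
    b = img_emb_g2 / np.linalg.norm(img_emb_g2, axis=1, keepdims=True)
    sim = a @ b.T                      # (n1, n2) cosine similarities

    best_for_g1 = sim.argmax(axis=1)   # nearest KG2 entity for each KG1 entity
    best_for_g2 = sim.argmax(axis=0)   # nearest KG1 entity for each KG2 entity

    seeds = []
    for i, j in enumerate(best_for_g1):
        # Keep only mutual nearest neighbours with high similarity, trading
        # coverage for precision in the pseudo seed dictionary.
        if best_for_g2[j] == i and sim[i, j] >= min_sim:
            seeds.append((i, int(j)))
    return seeds
```

Such pseudo seeds would then play the role that human-labelled anchors play in supervised entity alignment, which is what allows the pipeline to run without annotation.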
Related papers
- DERA: Dense Entity Retrieval for Entity Alignment in Knowledge Graphs [3.500936203815729]
We propose a dense entity retrieval framework for Entity Alignment (EA).
We leverage language models to uniformly encode various features of entities and facilitate nearest-entity search across Knowledge Graphs (KGs).
Our approach achieves state-of-the-art performance compared to existing EA methods.
arXiv Detail & Related papers (2024-08-02T10:12:42Z)
- Hypergraph based Understanding for Document Semantic Entity Recognition [65.84258776834524]
We build a novel hypergraph attention document semantic entity recognition framework, HGA, which uses hypergraph attention to focus on entity boundaries and entity categories at the same time.
Our results on FUNSD, CORD, XFUNDIE show that our method can effectively improve the performance of semantic entity recognition tasks.
arXiv Detail & Related papers (2024-07-09T14:35:49Z)
- Semi-Supervised Learning for Visual Bird's Eye View Semantic Segmentation [16.3996408206659]
We present a novel semi-supervised framework for visual BEV semantic segmentation that boosts performance by exploiting unlabeled images during training.
A consistency loss that makes full use of unlabeled data is then proposed to constrain the model on not only semantic prediction but also the BEV feature.
Experiments on the nuScenes and Argoverse datasets show that our framework can effectively improve prediction accuracy.
arXiv Detail & Related papers (2023-08-28T12:23:36Z)
- Advancing Visual Grounding with Scene Knowledge: Benchmark and Method [74.72663425217522]
Visual grounding (VG) aims to establish fine-grained alignment between vision and language.
Most existing VG datasets are constructed using simple description texts.
We propose a novel benchmark of Scene Knowledge-guided Visual Grounding.
arXiv Detail & Related papers (2023-07-21T13:06:02Z)
- A Fused Gromov-Wasserstein Framework for Unsupervised Knowledge Graph Entity Alignment [22.526341223786375]
In this paper, we introduce FGWEA, an unsupervised entity alignment framework that leverages the Fused Gromov-Wasserstein (FGW) distance.
We show that FGWEA surpasses 21 competitive baselines, including cutting-edge supervised entity alignment methods.
arXiv Detail & Related papers (2023-05-11T05:17:54Z)
- Visual Named Entity Linking: A New Dataset and A Baseline [61.38231023490981]
We consider a purely Visual-based Named Entity Linking (VNEL) task, where the input only consists of an image.
We propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL).
We present a high-quality human-annotated visual person linking dataset, named WIKIPerson.
arXiv Detail & Related papers (2022-11-09T13:27:50Z)
- EventEA: Benchmarking Entity Alignment for Event-centric Knowledge Graphs [17.27027602556303]
We show that the progress made in the past was due to biased and unchallenging evaluation.
We construct a new dataset with heterogeneous relations and attributes based on event-centric KGs.
As a new approach to this difficult problem, we propose a time-aware literal encoder for entity alignment.
arXiv Detail & Related papers (2022-11-05T05:34:21Z)
- Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction [88.6585431949086]
We propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction.
We treat the visual representation as a pluggable visual prefix that guides the textual representation toward error-insensitive prediction decisions.
Experiments on three benchmark datasets demonstrate the effectiveness of our method, which achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-05-07T02:10:55Z)
- Semi-constraint Optimal Transport for Entity Alignment with Dangling Cases [6.755145435406154]
We propose an unsupervised method called Semi-constraint Optimal Transport for Entity Alignment in Dangling cases (SoTead).
Our main idea is to model the entity alignment between two KGs as an optimal transport problem from one KG's entities to the other's; a generic sketch of this formulation appears after this list.
In the experiments, we first show the superiority of SoTead on a commonly used entity alignment dataset.
arXiv Detail & Related papers (2022-03-11T04:20:18Z)
- Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
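As a concrete illustration of the optimal-transport view of entity alignment referenced in the SoTead entry above, the sketch below runs plain entropically regularised Sinkhorn iterations over a cosine-distance cost between cross-KG entity embeddings. It is a generic formulation, not SoTead's or FGWEA's actual method: the cost definition, uniform marginals, `reg` value, and function name are illustrative assumptions, and neither dangling-entity handling nor the Gromov-Wasserstein structural term is modelled.

```python
import numpy as np

def sinkhorn_alignment(emb_g1: np.ndarray,
                       emb_g2: np.ndarray,
                       reg: float = 0.05,
                       n_iters: int = 200):
    """Cast entity alignment as entropically regularised optimal transport.

    emb_g1: (n1, d) entity embeddings for KG1
    emb_g2: (n2, d) entity embeddings for KG2
    Returns the transport plan P (n1, n2) and a row-wise arg-max matching.
    """
    n1, n2 = len(emb_g1), len(emb_g2)

    # Cost = cosine distance between cross-KG entity embeddings.
    a = emb_g1 / np.linalg.norm(emb_g1, axis=1, keepdims=True)
    b = emb_g2 / np.linalg.norm(emb_g2, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T

    # Uniform marginals: every entity sends / receives the same mass.
    mu = np.full(n1, 1.0 / n1)
    nu = np.full(n2, 1.0 / n2)

    K = np.exp(-cost / reg)            # Gibbs kernel
    u = np.ones(n1)
    for _ in range(n_iters):           # standard Sinkhorn updates
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]    # transport plan

    matches = P.argmax(axis=1)         # candidate KG2 entity for each KG1 entity
    return P, matches
```

Reading the alignment off the transport plan (rather than off raw nearest neighbours) enforces a soft one-to-one constraint, which is the main appeal of the OT formulation for entity alignment.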