Transformer-based Entity Typing in Knowledge Graphs
- URL: http://arxiv.org/abs/2210.11151v1
- Date: Thu, 20 Oct 2022 10:40:25 GMT
- Title: Transformer-based Entity Typing in Knowledge Graphs
- Authors: Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Ru Li, Jeff Z. Pan
- Abstract summary: We propose a novel Transformer-based Entity Typing (TET) approach that effectively encodes the content of an entity's neighbors.
Experiments on two real-world datasets demonstrate the superior performance of TET compared to the state-of-the-art.
- Score: 17.134032162338833
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the knowledge graph entity typing task, which aims at inferring
plausible entity types. In this paper, we propose a novel Transformer-based
Entity Typing (TET) approach, effectively encoding the content of neighbors of
an entity. More precisely, TET is composed of three different mechanisms: a
local transformer that infers missing types of an entity by independently
encoding the information provided by each of its neighbors; a global
transformer that aggregates the information of all neighbors of an entity into
a single long sequence to reason about more complex entity types; and a
context transformer that integrates neighbor content according to its
contribution to type inference, through information exchange between neighbor
pairs.
Furthermore, TET uses information about class membership of types to
semantically strengthen the representation of an entity. Experiments on two
real-world datasets demonstrate the superior performance of TET compared to the
state-of-the-art.
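To make these three mechanisms concrete, below is a minimal PyTorch sketch of how per-neighbor (local), concatenated (global), and pairwise-exchange (context) encodings could be combined into type scores. The dimensions, mean-pooling choices, and linear scoring head are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of TET's three encoding mechanisms; all dimensions,
# pooling choices, and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn


class TETSketch(nn.Module):
    def __init__(self, dim: int = 128, num_types: int = 500, heads: int = 4):
        super().__init__()

        def enc() -> nn.TransformerEncoder:
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=1)

        self.local_enc = enc()    # encodes each neighbor independently
        self.global_enc = enc()   # encodes all neighbors as one long sequence
        self.context_enc = enc()  # lets neighbor summaries exchange information
        self.type_scorer = nn.Linear(dim, num_types)

    def forward(self, neighbors: torch.Tensor) -> torch.Tensor:
        # neighbors: (batch, n_neighbors, seq_len, dim), where each neighbor
        # is a short sequence of embeddings (e.g., relation + entity tokens).
        b, n, s, d = neighbors.shape

        # Local transformer: encode every neighbor on its own, then pool.
        local = self.local_enc(neighbors.reshape(b * n, s, d))
        local = local.mean(dim=1).reshape(b, n, d)

        # Global transformer: one long sequence over all neighbors' content.
        glob = self.global_enc(neighbors.reshape(b, n * s, d)).mean(dim=1)

        # Context transformer: self-attention over pooled neighbor summaries,
        # weighting each neighbor's contribution via pairwise exchange.
        ctx = self.context_enc(local).mean(dim=1)

        # Combine the three views and score every candidate type.
        entity = local.mean(dim=1) + glob + ctx
        return self.type_scorer(entity)  # (batch, num_types)


# Example: 2 entities, 5 neighbors each, 3 tokens per neighbor.
scores = TETSketch()(torch.randn(2, 5, 3, 128))
print(scores.shape)  # torch.Size([2, 500])
```

The paper additionally strengthens entity representations with information about class membership of types; that component is omitted from this sketch.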
Related papers
- COTET: Cross-view Optimal Transport for Knowledge Graph Entity Typing [27.28214706269035]
Knowledge graph entity typing aims to infer missing entity type instances in knowledge graphs.
Previous research has predominantly centered on leveraging contextual information associated with entities.
This paper introduces Cross-view Optimal Transport for knowledge graph Entity Typing.
arXiv Detail & Related papers (2024-05-22T12:53:12Z)
- Entity Disambiguation via Fusion Entity Decoding [68.77265315142296]
We propose an encoder-decoder model to disambiguate entities with more detailed entity descriptions.
We observe a +1.5% improvement in end-to-end entity linking on the GERBIL benchmark compared with EntQA.
arXiv Detail & Related papers (2024-04-02T04:27:54Z)
- Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains [51.02035914828596]
We study the task of seed-guided fine-grained entity typing in science and engineering domains.
We propose SEType which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus.
It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types.
arXiv Detail & Related papers (2024-01-23T22:36:03Z)
- Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs [25.399684403558553]
We propose a novel method called Multi-view Contrastive Learning for knowledge graph Entity Typing (MCLET).
MCLET effectively encodes the coarse-grained knowledge provided by clusters into entity and type embeddings.
arXiv Detail & Related papers (2023-10-18T14:41:09Z)
- Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment [17.592908862768425]
We propose a novel MMEA transformer, called MoAlign, that hierarchically introduces neighbor features, multi-modal attributes, and entity types.
Taking advantage of the transformer's ability to integrate multiple sources of information, we design a hierarchical modifiable self-attention block in a transformer encoder.
Our approach outperforms strong competitors and achieves excellent entity alignment performance.
arXiv Detail & Related papers (2023-10-10T07:06:06Z)
- From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment [17.70562397382911]
Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform both triples into unified textual sequences, and model the entity alignment (EA) task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information.
arXiv Detail & Related papers (2023-05-19T08:06:50Z)
- SIM-Trans: Structure Information Modeling Transformer for Fine-grained Visual Categorization [59.732036564862796]
We propose the Structure Information Modeling Transformer (SIM-Trans), which incorporates object structure information into the transformer to enhance discriminative representation learning.
The two proposed modules are lightweight and can be plugged into any transformer network and trained end-to-end easily.
Experiments and analyses demonstrate that the proposed SIM-Trans achieves state-of-the-art performance on fine-grained visual categorization benchmarks.
arXiv Detail & Related papers (2022-08-31T03:00:07Z)
- Masked Transformer for Neighbourhood-aware Click-Through Rate Prediction [74.52904110197004]
We propose Neighbor-Interaction based CTR prediction, which puts this task into a Heterogeneous Information Network (HIN) setting.
In order to enhance the representation of the local neighbourhood, we consider four types of topological interaction among the nodes.
We conduct comprehensive experiments on two real-world datasets, and the experimental results show that our proposed method significantly outperforms state-of-the-art CTR models.
arXiv Detail & Related papers (2022-01-25T12:44:23Z)
- HittER: Hierarchical Transformers for Knowledge Graph Embeddings [85.93509934018499]
We propose HittER to learn representations of entities and relations in a complex knowledge graph.
Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets.
We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
arXiv Detail & Related papers (2020-08-28T18:58:15Z)
- Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types (see the sketch after this list).
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z)
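As a concrete illustration of the last entry, here is a minimal sketch of entity representations whose coordinates are per-type posterior probabilities. The encoder input, the three-type inventory, and the sigmoid head are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of representing an entity as a vector of per-type posterior
# probabilities, as in the last entry above. The encoder input, type
# inventory, and head are placeholders, not the paper's exact setup.
import torch
import torch.nn as nn


class TypeProbabilityRepresentation(nn.Module):
    def __init__(self, encoder_dim: int = 768,
                 type_names: tuple = ("person", "organization", "location")):
        super().__init__()
        self.type_names = type_names
        # One independent binary decision per fine-grained type.
        self.type_head = nn.Linear(encoder_dim, len(type_names))

    def forward(self, mention_encoding: torch.Tensor) -> torch.Tensor:
        # Sigmoid per type: coordinate i is P(type_i | mention), so the
        # vector itself is the human-readable entity representation.
        return torch.sigmoid(self.type_head(mention_encoding))


model = TypeProbabilityRepresentation()
probs = model(torch.randn(1, 768))  # stand-in for a real mention encoder
for name, p in zip(model.type_names, probs[0].tolist()):
    print(f"{name}: {p:.2f}")
```

Because each coordinate is a calibrated per-type probability, the vector can be read off directly, and shrinking the type inventory (as the entry describes) simply shrinks the representation.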
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.