The Integration of Semantic and Structural Knowledge in Knowledge Graph Entity Typing
- URL: http://arxiv.org/abs/2404.08313v1
- Date: Fri, 12 Apr 2024 08:17:44 GMT
- Title: The Integration of Semantic and Structural Knowledge in Knowledge Graph Entity Typing
- Authors: Muzhi Li, Minda Hu, Irwin King, Ho-fung Leung
- Abstract summary: Recent works only utilize the structural knowledge in the local neighborhood of entities, disregarding the semantic knowledge in their textual representations.
We propose a novel Semantic and Structure-aware KG Entity Typing (SSET) framework.
- Score: 36.25678234550434
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Knowledge Graph Entity Typing (KGET) task aims to predict missing type annotations for entities in knowledge graphs. Recent works only utilize the structural knowledge in the local neighborhood of entities, disregarding semantic knowledge in the textual representations of entities, relations, and types that are also crucial for type inference. Additionally, we observe that the interaction between semantic and structural knowledge can be utilized to address the false-negative problem. In this paper, we propose a novel Semantic and Structure-aware KG Entity Typing (SSET) framework, which is composed of three modules. First, the Semantic Knowledge Encoding module encodes factual knowledge in the KG with a Masked Entity Typing task. Then, the Structural Knowledge Aggregation module aggregates knowledge from the multi-hop neighborhood of entities to infer missing types. Finally, the Unsupervised Type Re-ranking module utilizes the inference results from the two models above to generate type predictions that are robust to false-negative samples. Extensive experiments show that SSET significantly outperforms existing state-of-the-art methods.
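The abstract above describes an unsupervised re-ranking step that fuses the predictions of a semantic model and a structural model. As a rough illustration only (the fusion rule, weights, and type names below are assumptions, not the paper's actual formulation), combining two per-type score maps could look like:

```python
# Illustrative sketch of fusing two typing models' scores for re-ranking.
# The weighting scheme and agreement bonus are assumptions for exposition,
# not the method defined in the SSET paper.

def rerank(semantic_scores, structural_scores, alpha=0.5):
    """Fuse two {type: score} dicts into a single ranked list of types."""
    types = set(semantic_scores) | set(structural_scores)
    fused = {}
    for t in types:
        s = semantic_scores.get(t, 0.0)
        g = structural_scores.get(t, 0.0)
        # Weighted sum, plus a small bonus when both models support the type,
        # so mutually supported candidates rise above single-model noise.
        fused[t] = alpha * s + (1 - alpha) * g + (0.1 if s > 0 and g > 0 else 0.0)
    return sorted(types, key=lambda t: fused[t], reverse=True)

sem = {"/person/artist": 0.9, "/person": 0.6, "/location": 0.05}
struct = {"/person": 0.7, "/person/artist": 0.6}
ranking = rerank(sem, struct)
print(ranking)  # "/person/artist" and "/person" lead; "/location" trails
```

The point of such a fusion is that a type missing from one model's candidate list (a potential false negative) can still be recovered if the other model scores it highly.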
Related papers
- SEMMA: A Semantic Aware Knowledge Graph Foundation Model [34.001718474850676]
Knowledge Graph Foundation Models (KGFMs) have shown promise in enabling zero-shot reasoning over unseen graphs by learning transferable patterns.
We introduce SEMMA, a dual-module KGFM that integrates transferable textual semantics alongside structure.
We show that in more challenging generalization settings, where the test-time relation vocabulary is entirely unseen, structural methods collapse while SEMMA is 2x more effective.
arXiv Detail & Related papers (2025-05-26T18:15:25Z) - Knowledge Graph Completion with Relation-Aware Anchor Enhancement [50.50944396454757]
We propose a relation-aware anchor enhanced knowledge graph completion method (RAA-KGC).
We first generate anchor entities within the relation-aware neighborhood of the head entity.
Then, by pulling the query embedding towards the neighborhoods of the anchors, it is tuned to be more discriminative for target entity matching.
arXiv Detail & Related papers (2025-04-08T15:22:08Z) - Hypergraph based Understanding for Document Semantic Entity Recognition [65.84258776834524]
We build a novel hypergraph attention document semantic entity recognition framework, HGA, which uses hypergraph attention to focus on entity boundaries and entity categories at the same time.
Our results on FUNSD, CORD, XFUNDIE show that our method can effectively improve the performance of semantic entity recognition tasks.
arXiv Detail & Related papers (2024-07-09T14:35:49Z) - Hypertext Entity Extraction in Webpage [112.56734676713721]
We introduce a MoE-based Entity Extraction Framework (MoEEF), which integrates multiple features to enhance model performance.
We also analyze the effectiveness of hypertext features in HEED and several model components in MoEEF.
arXiv Detail & Related papers (2024-03-04T03:21:40Z) - Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs [25.399684403558553]
We propose a novel method called Multi-view Contrastive Learning for knowledge graph Entity Typing (MCLET)
MCLET effectively encodes the coarse-grained knowledge provided by clusters into entity and type embeddings.
arXiv Detail & Related papers (2023-10-18T14:41:09Z) - DocTr: Document Transformer for Structured Information Extraction in Documents [36.1145541816468]
We present a new formulation for structured information extraction from visually rich documents.
It aims to address the limitations of existing IOB tagging or graph-based formulations.
We represent an entity as an anchor word and a bounding box, and represent entity linking as the association between anchor words.
arXiv Detail & Related papers (2023-07-16T02:59:30Z) - OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing [25.516304052884397]
Fine-grained entity typing (FET) assigns entities in text with context-sensitive, fine-grained semantic types.
OntoType follows a type ontological structure, from coarse to fine, ensembles multiple PLM prompting results to generate a set of type candidates.
Our experiments on the Ontonotes, FIGER, and NYT datasets demonstrate that our method outperforms the state-of-the-art zero-shot fine-grained entity typing methods.
arXiv Detail & Related papers (2023-05-21T00:32:37Z) - HiSMatch: Historical Structure Matching based Temporal Knowledge Graph Reasoning [59.38797474903334]
This paper proposes the Historical Structure Matching (HiSMatch) model.
It applies two structure encoders to capture the semantic information contained in the historical structures of the query and candidate entities.
Experiments on six benchmark datasets demonstrate the significant improvement of the proposed HiSMatch model, with up to 5.6% performance improvement in MRR, compared to the state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-18T09:39:26Z) - Context-aware Entity Typing in Knowledge Graphs [12.181416235996302]
Knowledge graph entity typing aims to infer entities' missing types in knowledge graphs.
This paper proposes a novel method for this task by utilizing entities' contextual information.
Experiments on two real-world KGs demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-09-16T13:59:27Z) - Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [49.89831914386982]
We propose a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text.
Our approach outperforms the pre-training of plain text using only 1/4 of the data.
arXiv Detail & Related papers (2021-09-02T16:05:24Z) - KGSynNet: A Novel Entity Synonyms Discovery Framework with Knowledge Graph [23.053995137917994]
We propose a novel entity synonyms discovery framework, named KGSynNet.
Specifically, we pre-train subword embeddings for mentions and entities using a large-scale domain-specific corpus.
We employ a specifically designed fusion gate to adaptively absorb the entities' knowledge information into their semantic features.
arXiv Detail & Related papers (2021-03-16T07:32:33Z) - Autoregressive Entity Retrieval [55.38027440347138]
Entities are at the center of how we represent and aggregate knowledge.
The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering.
We propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion.
arXiv Detail & Related papers (2020-10-02T10:13:31Z) - Connecting Embeddings for Knowledge Graph Entity Typing [22.617375045752084]
Knowledge graph (KG) entity typing aims at inferring possible missing entity type instances in KG.
We propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.
arXiv Detail & Related papers (2020-07-21T15:00:01Z) - Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.