HyperFormer: Enhancing Entity and Relation Interaction for
Hyper-Relational Knowledge Graph Completion
- URL: http://arxiv.org/abs/2308.06512v1
- Date: Sat, 12 Aug 2023 09:31:43 GMT
- Title: HyperFormer: Enhancing Entity and Relation Interaction for
Hyper-Relational Knowledge Graph Completion
- Authors: Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Ru Li, Jeff Z. Pan
- Abstract summary: Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by associating attribute-value qualifiers to triples.
We propose HyperFormer, a model that considers local-level sequential information, which encodes the content of the entities, relations and qualifiers of a triple.
- Score: 25.399684403558553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by
associating attribute-value qualifiers to triples, which effectively represent
additional fine-grained information about their associated triples.
Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown
triples while considering their qualifiers. Most existing approaches to HKGC
exploit a global-level graph structure to encode hyper-relational knowledge
into the graph convolution message passing process. However, the addition of
multi-hop information might bring noise into the triple prediction process. To
address this problem, we propose HyperFormer, a model that considers
local-level sequential information, which encodes the content of the entities,
relations and qualifiers of a triple. More precisely, HyperFormer is composed
of three different modules: an entity neighbor aggregator module that
integrates information from the neighbors of an entity to capture different
perspectives of it; a relation qualifier aggregator module that integrates
hyper-relational knowledge into the corresponding relation to refine the
representation of relational content; and a convolution-based bidirectional
interaction module that captures pairwise bidirectional interactions of
entity-relation, entity-qualifier, and relation-qualifier pairs to achieve a
deeper perception of the content related to the current statement.
Furthermore, we introduce a Mixture-of-Experts strategy into
the feed-forward layers of HyperFormer to strengthen its representation
capabilities while reducing the number of model parameters and the amount of computation.
Extensive experiments on three well-known datasets with four different
conditions demonstrate HyperFormer's effectiveness. Datasets and code are
available at https://github.com/zhiweihu1103/HKGC-HyperFormer.
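
As a concrete picture of how the three modules and the Mixture-of-Experts
feed-forward layer could fit together, the following PyTorch sketch mirrors the
structure described in the abstract. All class names, dimensions, and
aggregation details (attention over an entity's neighbors, a linear projection
over qualifier pairs, a two-channel 1D convolution for pairwise interaction, a
softly gated expert mixture) are illustrative assumptions rather than the
authors' implementation; the official code is at the repository above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Mixture-of-Experts feed-forward layer: a router softly mixes expert outputs.

    Note: the abstract describes the MoE strategy as reducing parameters and
    computation; a sparse (e.g. top-1) router would give that saving, the dense
    mixture here just keeps the sketch short."""
    def __init__(self, dim, num_experts=4, hidden=256):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x):                                        # x: (batch, dim)
        gate = F.softmax(self.router(x), dim=-1)                 # (batch, num_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        return (gate.unsqueeze(-1) * outs).sum(dim=1)            # weighted mixture

class HyperFormerSketch(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        # (1) entity neighbor aggregator: pools neighbor embeddings into the entity.
        self.neighbor_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # (2) relation qualifier aggregator: folds (qualifier relation, value) pairs
        #     into the main relation representation.
        self.qual_proj = nn.Linear(2 * dim, dim)
        # (3) convolution-based interaction over pairs of representations.
        self.pair_conv = nn.Conv1d(in_channels=2, out_channels=1, kernel_size=3, padding=1)
        self.moe_ffn = MoEFeedForward(dim)
        self.score = nn.Linear(dim, num_entities)

    def interact(self, a, b):
        # Stack two vectors as two "channels" and mix them with a 1D convolution.
        pair = torch.stack([a, b], dim=1)              # (batch, 2, dim)
        return self.pair_conv(pair).squeeze(1)         # (batch, dim)

    def forward(self, head, relation, neighbors, qual_rels, qual_vals):
        h = self.ent_emb(head)                         # (batch, dim)
        r = self.rel_emb(relation)                     # (batch, dim)
        # (1) aggregate the head entity's neighbors to capture different perspectives of it.
        nb = self.ent_emb(neighbors)                   # (batch, n_neighbors, dim)
        h_ctx, _ = self.neighbor_attn(h.unsqueeze(1), nb, nb)
        h = h + h_ctx.squeeze(1)
        # (2) fold qualifier content into the relation representation.
        q = torch.cat([self.rel_emb(qual_rels), self.ent_emb(qual_vals)], dim=-1)
        q = self.qual_proj(q).mean(dim=1)              # (batch, dim)
        r = r + q
        # (3) pairwise interactions of entity-relation, entity-qualifier, relation-qualifier.
        x = self.interact(h, r) + self.interact(h, q) + self.interact(r, q)
        x = x + self.moe_ffn(x)                        # MoE feed-forward layer
        return self.score(x)                           # scores over candidate tail entities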
Related papers
- HyperMono: A Monotonicity-aware Approach to Hyper-Relational Knowledge Representation [27.28214706269035]
In a hyper-relational knowledge graph (HKG), each fact is composed of a main triple associated with attribute-value qualifiers, which express additional factual knowledge.
This paper proposes the HyperMono model for hyper-relational knowledge graph completion, which realizes stage reasoning and qualifier monotonicity.
arXiv Detail & Related papers (2024-04-15T15:00:17Z)
- Hyperedge Interaction-aware Hypergraph Neural Network [11.359757898963284]
HeIHNN is a hyperedge interaction-aware hypergraph neural network.
We introduce a novel mechanism to enhance information flow between hyperedges and nodes.
arXiv Detail & Related papers (2024-01-28T07:05:30Z) - Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
arXiv Detail & Related papers (2023-12-18T17:50:52Z)
- Joint Entity and Relation Extraction with Span Pruning and Hypergraph Neural Networks [58.43972540643903]
We propose a HyperGraph neural network for ERE ($hgnn$), which is built upon PL-marker (a state-of-the-art marker-based pipeline model).
To alleviate error propagation, we use a high-recall pruner mechanism to transfer the burden of entity identification and labeling from the NER module to the joint module of our model.
Experiments on three widely used benchmarks for ERE task show significant improvements over the previous state-of-the-art PL-marker.
arXiv Detail & Related papers (2023-10-26T08:36:39Z)
- HIORE: Leveraging High-order Interactions for Unified Entity Relation Extraction [85.80317530027212]
We propose HIORE, a new method for unified entity relation extraction.
The key insight is to leverage the complex association among word pairs, which contains richer information than the first-order word-by-word interactions.
Experiments show that HIORE achieves state-of-the-art performance on relation extraction and an improvement of 1.1-1.8 F1 points over the prior best unified model.
arXiv Detail & Related papers (2023-05-07T14:57:42Z)
- A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach [59.89749342550104]
We propose the task of hyper-relational extraction to extract more specific and complete facts from text.
Existing models cannot perform hyper-relational extraction as it requires a model to consider the interaction between three entities.
We propose CubeRE, a cube-filling model inspired by table-filling approaches, which explicitly considers the interaction between relation triplets and qualifiers.
arXiv Detail & Related papers (2022-11-18T03:51:28Z)
- Learning Representations for Hyper-Relational Knowledge Graphs [35.380689788802776]
We design a framework to learn representations for hyper-relational facts using multiple aggregators.
Experiments demonstrate the effectiveness of our framework across multiple datasets.
We conduct an ablation study that validates the importance of the various components in our framework.
arXiv Detail & Related papers (2022-08-30T15:02:14Z)
- CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional Network for Clustering [51.62959830761789]
We propose a cross-attention based deep clustering framework, named Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN).
CaEGCN contains four main modules: cross-attention fusion, Content Auto-encoder, Graph Convolutional Auto-encoder and self-supervised model.
Experimental results on different types of datasets prove the superiority and robustness of the proposed CaEGCN.
arXiv Detail & Related papers (2021-01-18T05:21:59Z)
- Message Passing for Hyper-Relational Knowledge Graphs [7.733963597282456]
We propose StarE, a message passing graph encoder capable of modeling such hyper-relational knowledge graphs.
StarE can encode an arbitrary number of qualifiers along with the main triple while keeping the semantic roles of qualifiers and triples intact.
Our experiments demonstrate that a StarE-based link prediction model outperforms existing approaches across multiple benchmarks.
arXiv Detail & Related papers (2020-09-22T22:38:54Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)