Joint embedding in Hierarchical distance and semantic representation learning for link prediction
- URL: http://arxiv.org/abs/2303.15655v1
- Date: Tue, 28 Mar 2023 00:42:29 GMT
- Title: Joint embedding in Hierarchical distance and semantic representation learning for link prediction
- Authors: Jin Liu and Jianye Chen and Chongfeng Fan and Fengyu Zhou
- Abstract summary: We propose a novel knowledge graph embedding model for the link prediction task, namely, HIE.
HIE models each triplet (h, r, t) in distance measurement space and semantic measurement space simultaneously.
HIE is introduced into a hierarchical-aware space to leverage the rich hierarchical information of entities and relations for better representation learning.
- Score: 4.18621837986466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The link prediction task aims to predict missing entities or relations in a knowledge graph and is essential for downstream applications. Existing well-known models deal with this task mainly by representing knowledge graph triplets in distance space or semantic space. However, they cannot fully capture the information of head and tail entities, nor make good use of hierarchical-level information. Thus, in this paper, we propose a novel knowledge graph embedding model for the link prediction task, namely HIE, which models each triplet (\textit{h}, \textit{r}, \textit{t}) in distance measurement space and semantic measurement space simultaneously. Moreover, HIE is introduced into a hierarchical-aware space to leverage the rich hierarchical information of entities and relations for better representation learning. Specifically, we apply a distance transformation operation to the head entity in distance space to obtain the tail entity, instead of a translation-based or rotation-based approach. Experimental results on four real-world datasets show that HIE outperforms several existing state-of-the-art knowledge graph embedding methods on the link prediction task and handles complex relations accurately.
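The paper's implementation is not reproduced here, but the core idea in the abstract, scoring each triplet jointly in a distance space (where the head entity is mapped by a learned transformation rather than a pure translation or rotation) and in a semantic space, can be sketched in PyTorch as follows. The class name, dimensions, and the fixed mixing weight `alpha` are illustrative assumptions, not the authors' HIE design.

```python
import torch
import torch.nn as nn

class JointScorer(nn.Module):
    """Illustrative joint distance/semantic scorer for triplets (h, r, t).

    A sketch of the idea in the abstract, not the authors' HIE model: the
    head embedding is mapped by a relation-specific linear transformation
    (rather than translated or rotated), and the resulting distance score
    is combined with a separate semantic-space similarity.
    """

    def __init__(self, num_entities, num_relations, dim=200, alpha=0.5):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)       # distance-space embeddings
        self.ent_sem = nn.Embedding(num_entities, dim)   # semantic-space embeddings
        self.rel_mat = nn.Embedding(num_relations, dim * dim)  # per-relation transform
        self.rel_sem = nn.Embedding(num_relations, dim)
        self.dim = dim
        self.alpha = alpha  # assumed fixed mixing weight between the two spaces

    def forward(self, h_idx, r_idx, t_idx):
        h, t = self.ent(h_idx), self.ent(t_idx)                 # (B, d)
        W = self.rel_mat(r_idx).view(-1, self.dim, self.dim)    # (B, d, d)
        # Distance space: transform the head, measure its distance to the tail.
        h_proj = torch.bmm(W, h.unsqueeze(-1)).squeeze(-1)
        dist_score = -torch.norm(h_proj - t, p=2, dim=-1)
        # Semantic space: similarity between the (head, relation) pair and the tail.
        sem_score = torch.cosine_similarity(
            self.ent_sem(h_idx) + self.rel_sem(r_idx), self.ent_sem(t_idx), dim=-1)
        return self.alpha * dist_score + (1.0 - self.alpha) * sem_score
```

In such a setup, a higher score marks a more plausible triplet; training would typically rank true triplets above corrupted ones with a margin or cross-entropy loss.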
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- A Condensed Transition Graph Framework for Zero-shot Link Prediction with Large Language Models [20.220781775335645]
We introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP).
CTLP encodes the information of all paths in linear time complexity to predict unseen relations between entities.
Our proposed CTLP method achieves state-of-the-art performance on three standard ZSLP datasets.
arXiv Detail & Related papers (2024-02-16T16:02:33Z)
- ReVoLT: Relational Reasoning and Voronoi Local Graph Planning for Target-driven Navigation [1.0896567381206714]
Embodied AI is an inevitable trend that emphasizes the interaction between intelligent entities and the real world.
Recent works focus on exploiting layout relationships with graph neural networks (GNNs).
We decouple this task and propose ReVoLT, a hierarchical framework.
arXiv Detail & Related papers (2023-01-06T05:19:56Z)
- ConstGCN: Constrained Transmission-based Graph Convolutional Networks for Document-level Relation Extraction [24.970508961370548]
Document-level relation extraction with graph neural networks faces a fundamental graph construction gap between training and inference.
We propose ConstGCN, a novel graph convolutional network that performs knowledge-based information propagation between entities.
Experimental results show that our method outperforms the previous state-of-the-art (SOTA) approaches on the DocRE dataset.
arXiv Detail & Related papers (2022-10-08T07:36:04Z)
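The ConstGCN entry above describes propagating information between entities using knowledge-based relation scores rather than a graph constructed in advance. A generic sketch of score-weighted message passing along those lines follows; the soft-adjacency normalization and single-step update are assumptions for illustration, not ConstGCN's actual propagation rule.

```python
import torch
import torch.nn as nn

class ScoreWeightedPropagation(nn.Module):
    """Generic message passing where edge weights come from relation scores.

    Sketch only: we assume a soft adjacency obtained by normalizing pairwise
    relation scores, so no hard graph construction is needed at inference.
    """

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, entity_reprs, rel_scores):
        # entity_reprs: (N, d) entity representations
        # rel_scores:   (N, N) plausibility score for a relation between each pair
        soft_adj = torch.softmax(rel_scores, dim=-1)   # normalize outgoing weights
        messages = soft_adj @ entity_reprs             # aggregate neighbor information
        return torch.relu(self.linear(messages))       # one propagation step
```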
- S$^2$Contact: Graph-based Network for 3D Hand-Object Contact Estimation with Semi-Supervised Learning [70.72037296392642]
We propose a novel semi-supervised framework that allows us to learn contact from monocular images.
Specifically, we leverage visual and geometric consistency constraints in large-scale datasets for generating pseudo-labels.
We show the benefits of using a contact map that governs hand-object interactions to produce more accurate reconstructions.
arXiv Detail & Related papers (2022-08-01T14:05:23Z)
- KGRefiner: Knowledge Graph Refinement for Improving Accuracy of Translational Link Prediction Methods [4.726777092009553]
This paper proposes a method for refining the knowledge graph.
It makes the knowledge graph more informative, and link prediction operations can be performed more accurately.
Our experiments show that our method can significantly increase the performance of translational link prediction methods.
arXiv Detail & Related papers (2021-06-27T13:32:39Z)
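For context on the KGRefiner entry above, the translational link prediction methods it targets (TransE and its descendants) treat a relation as a translation in embedding space and score a triplet by how closely the translated head matches the tail; a minimal sketch:

```python
import torch

def transe_score(h, r, t, p=1):
    """TransE-style score: a triplet (h, r, t) is plausible when h + r ≈ t.

    h, r, t: (B, d) embedding tensors; higher (less negative) score means
    a more plausible triplet.
    """
    return -torch.norm(h + r - t, p=p, dim=-1)
```

As the summary notes, the refinement operates on the graph itself rather than on a scoring function like this one, making such scores more discriminative.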
- Mutual Graph Learning for Camouflaged Object Detection [31.422775969808434]
A major challenge is that intrinsic similarities between foreground objects and their background surroundings make the features extracted by deep models indistinguishable.
We design a novel Mutual Graph Learning (MGL) model, which generalizes the idea of conventional mutual learning from regular grids to the graph domain.
In contrast to most mutual learning approaches that use a shared function to model all between-task interactions, MGL is equipped with typed functions for handling different complementary relations.
arXiv Detail & Related papers (2021-04-03T10:14:39Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Network (SGCN) that explores the implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into a visual-semantic joint embedding space, and obtains detection results by measuring their similarities.
arXiv Detail & Related papers (2020-08-14T09:11:18Z)
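The ConsNet entry above describes mapping visual features and HOI label embeddings into a joint space and scoring candidates by similarity. A minimal sketch of that general pattern follows; the projection layers, joint dimension, and cosine-similarity scoring are assumptions, not ConsNet's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingScorer(nn.Module):
    """Map visual features and label word embeddings into a shared space and
    score candidate HOI labels by cosine similarity (generic sketch)."""

    def __init__(self, visual_dim, word_dim, joint_dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, joint_dim)
        self.word_proj = nn.Linear(word_dim, joint_dim)

    def forward(self, visual_feats, label_embs):
        # visual_feats: (B, visual_dim) features of candidate human-object pairs
        # label_embs:   (L, word_dim) word embeddings of HOI labels
        v = F.normalize(self.visual_proj(visual_feats), dim=-1)
        w = F.normalize(self.word_proj(label_embs), dim=-1)
        return v @ w.t()   # (B, L) cosine similarities; higher = better match
```

Because scoring happens against label embeddings rather than fixed output classes, this style of model can rank labels it never saw during training, which is what enables the zero-shot setting.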
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
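The GMI entry above trains a graph encoder by maximizing a mutual-information measure between its input and output. One common way to make such an objective trainable is a discriminator-based lower bound, sketched below in InfoMax style; the bilinear discriminator and shuffle-based negatives are assumptions, not the paper's exact estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIObjective(nn.Module):
    """Discriminator-based MI lower bound between node inputs and encoder
    outputs (a generic InfoMax-style sketch, not GMI's exact formulation)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.bilinear = nn.Bilinear(in_dim, out_dim, 1)  # pair discriminator

    def forward(self, x, z):
        # x: (N, in_dim) input node features; z: (N, out_dim) encoder outputs
        pos = self.bilinear(x, z)                             # matched pairs
        neg = self.bilinear(x, z[torch.randperm(z.size(0))])  # shuffled pairs
        # Jensen-Shannon-style objective: score matched pairs above mismatched ones.
        return F.softplus(-pos).mean() + F.softplus(neg).mean()  # minimize this
```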