Repurposing Knowledge Graph Embeddings for Triple Representation via
Weak Supervision
- URL: http://arxiv.org/abs/2208.10328v1
- Date: Mon, 22 Aug 2022 14:07:08 GMT
- Title: Repurposing Knowledge Graph Embeddings for Triple Representation via
Weak Supervision
- Authors: Alexander Kalinowski and Yuan An
- Abstract summary: Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of knowledge graph embedding techniques treat entities and
predicates as separate embedding matrices, using aggregation functions to build
a representation of the input triple. However, these aggregations are lossy,
i.e. they do not capture the semantics of the original triples, such as
information contained in the predicates. To combat these shortcomings, current
methods learn triple embeddings from scratch without utilizing entity and
predicate embeddings from pre-trained models. In this paper, we design a novel
fine-tuning approach for learning triple embeddings by creating weak
supervision signals from pre-trained knowledge graph embeddings. We develop a
method for automatically sampling triples from a knowledge graph and estimating
their pairwise similarities from pre-trained embedding models. These pairwise
similarity scores are then fed to a Siamese-like neural architecture to
fine-tune triple representations. We evaluate the proposed method on two widely
studied knowledge graphs and show consistent improvement over other
state-of-the-art triple embedding methods on triple classification and triple
clustering tasks.
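The pipeline described in the abstract — sample triples, score their pairwise similarity with pre-trained embeddings, then fine-tune a Siamese encoder on those weak labels — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the "pre-trained" embeddings are random placeholders, the concatenation aggregation and the single shared linear projection are assumptions, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "pre-trained" KG embeddings (stand-ins for a real model
# such as TransE); sizes are illustrative only.
n_entities, n_relations, dim = 50, 10, 16
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def sample_triples(n):
    """Automatically sample (head, relation, tail) index triples."""
    return np.stack([rng.integers(0, n_entities, n),
                     rng.integers(0, n_relations, n),
                     rng.integers(0, n_entities, n)], axis=1)

def triple_vector(triples):
    """Aggregate pre-trained embeddings into a raw triple vector
    (concatenation is an assumed aggregation, chosen for simplicity)."""
    h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
    return np.concatenate(
        [entity_emb[h], relation_emb[r], entity_emb[t]], axis=1)

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

# Step 1: weak supervision -- pairwise similarity scores derived from
# the pre-trained embeddings, with no human labels involved.
pairs_a, pairs_b = sample_triples(200), sample_triples(200)
va, vb = triple_vector(pairs_a), triple_vector(pairs_b)
weak_labels = cosine(va, vb)

# Step 2: Siamese-style fine-tuning -- one shared projection W applied
# to both sides, trained so the projected cosine similarity matches the
# weak labels (regression on similarity scores).
W = rng.normal(scale=0.1, size=(va.shape[1], dim))
lr, losses = 0.01, []
for _ in range(200):
    za, zb = va @ W, vb @ W
    norm_a = np.linalg.norm(za, axis=1, keepdims=True)
    norm_b = np.linalg.norm(zb, axis=1, keepdims=True)
    na, nb = za / norm_a, zb / norm_b
    c = np.sum(na * nb, axis=1)
    err = (c - weak_labels)[:, None]
    losses.append(float(np.mean((c - weak_labels) ** 2)))
    # Analytic gradient of 0.5 * mean squared similarity error w.r.t. W.
    ga = err * (nb - c[:, None] * na) / norm_a
    gb = err * (na - c[:, None] * nb) / norm_b
    W -= lr * (va.T @ ga + vb.T @ gb) / len(err)

print(f"weak-label MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

With genuinely pre-trained entity and predicate embeddings in place of the random placeholders, the learned projection yields whole-triple vectors whose similarities can back downstream triple classification or clustering.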
Related papers
- Self-supervised Learning of Dense Hierarchical Representations for Medical Image Segmentation [2.2265038612930663]
This paper demonstrates a self-supervised framework for learning voxel-wise coarse-to-fine representations tailored for dense downstream tasks.
We devise a training strategy that balances the contributions of features from multiple scales, ensuring that the learned representations capture both coarse and fine-grained details.
arXiv Detail & Related papers (2024-01-12T09:47:17Z) - Walk-and-Relate: A Random-Walk-based Algorithm for Representation
Learning on Sparse Knowledge Graphs [5.444459446244819]
We propose an efficient method to augment the number of triplets to address the problem of data sparsity.
We also provide approaches to accurately and efficiently filter out informative metapaths from the possible set of metapaths.
The proposed approaches are model-agnostic, and the augmented training dataset can be used with any KG embedding approach out of the box.
arXiv Detail & Related papers (2022-09-19T05:35:23Z) - A Review of Knowledge Graph Completion [0.0]
Information extraction methods have proved effective at extracting triples from structured or unstructured data.
Most of the current knowledge graphs are incomplete.
In order to use KGs in downstream tasks, it is desirable to predict missing links in KGs.
arXiv Detail & Related papers (2022-08-24T16:42:59Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z) - Integrating Semantics and Neighborhood Information with Graph-Driven
Generative Models for Document Retrieval [51.823187647843945]
In this paper, we encode the neighborhood information with a graph-induced Gaussian distribution, and propose to integrate the two types of information with a graph-driven generative model.
Under the approximation, we prove that the training objective can be decomposed into terms involving only singleton or pairwise documents, enabling the model to be trained as efficiently as uncorrelated ones.
arXiv Detail & Related papers (2021-05-27T11:29:03Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Hierarchical and Unsupervised Graph Representation Learning with
Loukas's Coarsening [9.12816196758482]
We propose a novel algorithm for unsupervised graph representation learning with attributed graphs.
We show that our algorithm is competitive with the state of the art among unsupervised representation learning methods.
arXiv Detail & Related papers (2020-07-07T12:04:38Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
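The translational intuition behind graph embedding approaches such as TransE, mentioned in the last summary above, fits in a few lines; the vectors here are random illustrations rather than trained embeddings, and the dimension is arbitrary.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: negative distance ||h + r - t||. A triple is
    plausible when the relation acts as a translation from head to tail,
    so higher (less negative) scores mean more plausible triples."""
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(1)
h, r = rng.normal(size=8), rng.normal(size=8)
t_plausible = h + r + 0.01 * rng.normal(size=8)  # tail near the translation
t_random = rng.normal(size=8)                    # unrelated tail
print(transe_score(h, r, t_plausible) > transe_score(h, r, t_random))
```

Textual encoding approaches like KG-BERT instead score a triple from its surface text with a contextual encoder, trading TransE's geometric simplicity for richer semantics.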
This list is automatically generated from the titles and abstracts of the papers in this site.