Convolutional Complex Knowledge Graph Embeddings
- URL: http://arxiv.org/abs/2008.03130v3
- Date: Wed, 9 Jun 2021 12:25:01 GMT
- Title: Convolutional Complex Knowledge Graph Embeddings
- Authors: Caglar Demir and Axel-Cyrille Ngonga Ngomo
- Abstract summary: We present a new approach called ConEx, which infers missing links by leveraging a 2D convolution with a Hermitian inner product of complex-valued embedding vectors.
We evaluate ConEx against state-of-the-art approaches on the WN18RR, FB15K-237, KINSHIP and UMLS benchmark datasets.
- Score: 1.1650381752104297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of learning continuous vector
representations of knowledge graphs for predicting missing links. We present a
new approach called ConEx, which infers missing links by leveraging the
composition of a 2D convolution with a Hermitian inner product of
complex-valued embedding vectors. We evaluate ConEx against state-of-the-art
approaches on the WN18RR, FB15K-237, KINSHIP and UMLS benchmark datasets. Our
experimental results show that ConEx achieves a performance superior to that of
state-of-the-art approaches such as RotatE, QuatE and TuckER on the link
prediction task on all datasets while requiring at least 8 times fewer
parameters. We ensure the reproducibility of our results by providing an
open-source implementation, which includes training and evaluation scripts
along with pre-trained models, at https://github.com/conex-kge/ConEx.
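The scoring function described in the abstract can be sketched in NumPy. This is a minimal toy illustration, not the authors' implementation: the single-kernel convolution, the random projection back to embedding dimension, and all shapes are assumptions made for the sketch.

```python
import numpy as np

def conex_score(e_h, e_r, e_t, kernel, proj):
    """Toy ConEx-style scorer: a 2D convolution over the stacked real and
    imaginary parts of the head/relation embeddings, composed with a
    Hermitian inner product against the tail embedding (shapes assumed)."""
    d = e_h.shape[0]
    # Stack real/imag parts of head and relation into a 4 x d "image".
    img = np.stack([e_h.real, e_h.imag, e_r.real, e_r.imag])
    # Valid 2D cross-correlation with a single kernel.
    kh, kw = kernel.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    feat = np.array([[(img[i:i + kh, j:j + kw] * kernel).sum()
                      for j in range(W)] for i in range(H)]).ravel()
    # Project the flattened feature map to a complex vector of dimension d.
    c = proj @ feat
    conv = c[:d] + 1j * c[d:]
    # Hermitian inner product: Re(<conv o e_h o e_r, conj(e_t)>).
    return float(np.real(np.sum(conv * e_h * e_r * np.conj(e_t))))

# Tiny demo with random complex embeddings of dimension d = 8.
d = 8
rng = np.random.default_rng(0)
e_h, e_r, e_t = (rng.standard_normal(d) + 1j * rng.standard_normal(d)
                 for _ in range(3))
kernel = rng.standard_normal((3, 3))
# With a 3x3 kernel, the valid feature map has 2 * (d - 2) entries.
proj = rng.standard_normal((2 * d, 2 * (d - 2)))
score = conex_score(e_h, e_r, e_t, kernel, proj)
```

In training, such a real-valued score would typically be passed through a sigmoid and fit with a binary cross-entropy loss against observed versus corrupted triples.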
Related papers
- Boot and Switch: Alternating Distillation for Zero-Shot Dense Retrieval [50.47192086219752]
ABEL is a simple but effective unsupervised method to enhance passage retrieval in zero-shot settings.
By either fine-tuning ABEL on labelled data or integrating it with existing supervised dense retrievers, we achieve state-of-the-art results.
arXiv Detail & Related papers (2023-11-27T06:22:57Z) - Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV)
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on image classification across all three datasets in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z) - ExpressivE: A Spatio-Functional Embedding For Knowledge Graph Completion [78.8942067357231]
ExpressivE embeds pairs of entities as points and relations as hyper-parallelograms in the virtual triple space.
We show that ExpressivE is competitive with state-of-the-art KGEs and even significantly outperforms them on WN18RR.
arXiv Detail & Related papers (2022-06-08T23:34:39Z) - Kronecker Decomposition for Knowledge Graph Embeddings [5.49810117202384]
We propose a technique based on Kronecker decomposition to reduce the number of parameters in a knowledge graph embedding model.
The decomposition ensures that elementwise interactions between three embedding vectors are extended with interactions within each embedding vector.
Our experiments suggest that applying Kronecker decomposition on embedding matrices leads to an improved parameter efficiency on all benchmark datasets.
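The parameter saving behind such a decomposition can be illustrated with a small NumPy sketch. The shapes below are illustrative assumptions, not those used in the paper:

```python
import numpy as np

# Sketch of Kronecker decomposition for parameter efficiency: a large
# weight matrix W is represented as kron(A, B), so only A and B are stored.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))    #  64 parameters
B = rng.standard_normal((16, 16))  # 256 parameters
W = np.kron(A, B)                  # behaves like a 128 x 128 matrix

full_params = W.size               # 16384 entries if stored densely
stored_params = A.size + B.size    # 320 entries actually stored

# Applying W to a vector never requires materializing the dense matrix:
# with row-major reshaping, (A kron B) x == ravel(A @ X @ B.T) for X = x
# reshaped to (8, 16).
x = rng.standard_normal(128)
y_dense = W @ x
y_kron = (A @ x.reshape(8, 16) @ B.T).ravel()
assert np.allclose(y_dense, y_kron)
```

The matrix-product identity in the last lines is what makes the decomposition practical: the forward pass costs two small matrix multiplications instead of one large one.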
arXiv Detail & Related papers (2022-05-13T11:11:03Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Convolutional Hypercomplex Embeddings for Link Prediction [2.6209112069534046]
We propose QMult, OMult, ConvQ and ConvO to tackle the link prediction problem.
ConvQ and ConvO build upon QMult and OMult by including convolution operations in a way inspired by the residual learning framework.
We evaluate our approaches on seven link prediction datasets including WN18RR, FB15K-237 and YAGO3-10.
arXiv Detail & Related papers (2021-06-29T10:26:51Z) - Two Training Strategies for Improving Relation Extraction over Universal Graph [36.06238013119114]
This paper explores how Distantly Supervised Relation Extraction (DS-RE) can benefit from the use of a Universal Graph (UG) and a Knowledge Graph (KG).
We first report that a performance degradation is associated with the difficulty of learning a UG, and then propose two training strategies.
Experimental results on both biomedical and NYT10 datasets demonstrate the robustness of our methods, which achieve a new state-of-the-art result on the NYT10 dataset.
arXiv Detail & Related papers (2021-02-12T14:09:35Z) - Bootstrapping Relation Extractors using Syntactic Search by Examples [47.11932446745022]
We propose a process for bootstrapping training datasets which can be performed quickly by non-NLP-experts.
We take advantage of search engines over syntactic-graphs which expose a friendly by-example syntax.
We show that the resulting models are competitive with models trained on manually annotated data and on data obtained from distant supervision.
arXiv Detail & Related papers (2021-02-09T18:17:59Z) - Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework [31.35912529064612]
We re-implemented and evaluated 21 interaction models in the PyKEEN software package.
We performed large-scale benchmarking on four datasets, comprising several thousand experiments and 24,804 GPU hours.
arXiv Detail & Related papers (2020-06-23T22:30:52Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all tested setups, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.