Hardware-agnostic Computation for Large-scale Knowledge Graph Embeddings
- URL: http://arxiv.org/abs/2207.08544v1
- Date: Mon, 18 Jul 2022 12:10:27 GMT
- Title: Hardware-agnostic Computation for Large-scale Knowledge Graph Embeddings
- Authors: Caglar Demir and Axel-Cyrille Ngonga Ngomo
- Abstract summary: We develop a framework to compute embeddings for large-scale knowledge graphs in a hardware-agnostic manner.
We provide an open-source version of our framework along with a hub of pre-trained models having more than 11.4 B parameters.
- Score: 1.1650381752104297
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graph embedding research has mainly focused on learning continuous
representations of knowledge graphs towards the link prediction problem.
Recently developed frameworks can be effectively applied in research-related
applications. Yet, these frameworks do not fulfill many requirements of
real-world applications. As the size of the knowledge graph grows, moving
computation from a commodity computer to a cluster of computers in these
frameworks becomes more challenging. Finding suitable hyperparameter settings
w.r.t. time and computational budgets is left to practitioners. In addition,
the continual learning aspect in knowledge graph embedding frameworks is often
ignored, although continual learning plays an important role in many real-world
(deep) learning-driven applications. Arguably, these limitations explain the
lack of publicly available knowledge graph embedding models for large knowledge
graphs. We developed a framework based on DASK, PyTorch Lightning and
Hugging Face to compute embeddings for large-scale knowledge graphs in a
hardware-agnostic manner, addressing real-world challenges pertaining to the
scale of real applications. We provide an
open-source version of our framework along with a hub of pre-trained models
having more than 11.4 B parameters.
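To make the hardware-agnostic setup concrete, below is a minimal sketch of training a knowledge graph embedding model with Dask for out-of-core triple loading and PyTorch Lightning for automatic device selection. The DistMult scoring function, the whitespace-separated "head relation tail" file layout, and the file name are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
"""Minimal sketch: hardware-agnostic KGE training with Dask + PyTorch Lightning.
Assumes a whitespace-separated triple file ("head relation tail" per line)."""
import dask.dataframe as dd
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


def load_triples(path: str):
    # Dask reads a potentially huge triple file in parallel / out of core.
    df = dd.read_csv(path, sep=" ", header=None, names=["h", "r", "t"], dtype=str)
    df = df.compute()  # materialise once vocabularies are needed
    entities = {e: i for i, e in enumerate(sorted(set(df["h"]).union(df["t"])))}
    relations = {r: i for i, r in enumerate(sorted(set(df["r"])))}
    triples = torch.tensor(
        [[entities[h], relations[r], entities[t]] for h, r, t in df.itertuples(index=False)],
        dtype=torch.long,
    )
    return triples, len(entities), len(relations)


class DistMult(pl.LightningModule):
    # Simple KGE scoring model; Lightning handles device placement (CPU/GPU/TPU).
    def __init__(self, num_entities: int, num_relations: int, dim: int = 64):
        super().__init__()
        self.ent = torch.nn.Embedding(num_entities, dim)
        self.rel = torch.nn.Embedding(num_relations, dim)

    def training_step(self, batch, batch_idx):
        h, r, t = batch[0][:, 0], batch[0][:, 1], batch[0][:, 2]
        pos = (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1)
        # Negative sampling: corrupt tails uniformly at random.
        t_neg = torch.randint(0, self.ent.num_embeddings, t.shape, device=t.device)
        neg = (self.ent(h) * self.rel(r) * self.ent(t_neg)).sum(-1)
        return torch.nn.functional.margin_ranking_loss(
            pos, neg, torch.ones_like(pos), margin=1.0
        )

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.01)


if __name__ == "__main__":
    triples, n_ent, n_rel = load_triples("train.txt")  # hypothetical path
    loader = DataLoader(TensorDataset(triples), batch_size=1024, shuffle=True)
    # accelerator="auto" / devices="auto" lets Lightning pick whatever hardware exists.
    trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices="auto")
    trainer.fit(DistMult(n_ent, n_rel), loader)
```

With accelerator="auto" and devices="auto", Lightning selects CPU, GPU or other accelerators automatically, which is the sense in which such a setup is hardware-agnostic; scaling out to a cluster would additionally require a distributed training strategy.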
Related papers
- GraphAide: Advanced Graph-Assisted Query and Reasoning System [0.04999814847776096]
We introduce an advanced query and reasoning system, GraphAide, which constructs a knowledge graph (KG) from diverse sources and allows users to query and reason over the resulting KG.
GraphAide harnesses Large Language Models (LLMs) to rapidly develop domain-specific digital assistants.
arXiv Detail & Related papers (2024-10-29T07:25:30Z) - Resilience in Knowledge Graph Embeddings [1.90894751866253]
We give a unified definition of resilience, encompassing several factors such as generalisation, performance consistency, distribution adaptation, and robustness.
Our survey results show that most of the existing works focus on a specific aspect of resilience, namely robustness.
arXiv Detail & Related papers (2024-10-28T16:04:22Z) - Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare CGL methods with traditional continual learning techniques and analyze the applicability of those techniques to forgetting tasks.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z) - LasTGL: An Industrial Framework for Large-Scale Temporal Graph Learning [61.4707298969173]
We introduce LasTGL, an industrial framework that integrates unified implementations of common temporal graph learning algorithms.
LasTGL provides comprehensive temporal graph datasets, TGNN models and utilities along with well-documented tutorials.
arXiv Detail & Related papers (2023-11-28T08:45:37Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic,
and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs).
It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z) - OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer
Learning for Telepresence Robotics [124.08684545010664]
Scene graph generation from images is a task of great interest to applications such as robotics.
We propose an initial approximation to a framework called Ontology-Guided Scene Graph Generation (OG-SGG).
arXiv Detail & Related papers (2022-02-21T13:23:15Z) - A Survey of Knowledge Graph Embedding and Their Applications [0.17205106391379024]
Knowledge graph embedding enables real-world applications to consume information and improve performance.
This paper surveys the growth of the field of KG embedding from simple translation-based models to enrichment-based models.
arXiv Detail & Related papers (2021-07-16T12:07:53Z) - SpikE: spike-based embeddings for multi-relational graph data [0.0]
Spiking neural networks are still mostly applied to tasks stemming from sensory processing.
A rich data representation that finds wide application in industry and research is the so-called knowledge graph.
We propose a spike-based algorithm where nodes in a graph are represented by single spike times of neuron populations.
arXiv Detail & Related papers (2021-04-27T18:00:12Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Graphs for deep learning representations [1.0152838128195467]
We introduce a graph formalism based on the recent advances in Graph Signal Processing (GSP).
Namely, we use graphs to represent the latent spaces of deep neural networks; a minimal illustrative sketch of this idea appears after the list below.
We showcase that this graph formalism allows us to answer various questions including: ensuring robustness, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
arXiv Detail & Related papers (2020-12-14T11:51:23Z)
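As a sketch of the last entry's idea of representing latent spaces as graphs, one common construction is a k-nearest-neighbour similarity graph over intermediate activations, on which a label signal's smoothness can be measured in GSP terms. The network stand-in, layer choice, and smoothness measure below are assumptions for illustration, not the paper's exact construction.

```python
"""Illustrative sketch: a k-NN graph over latent representations and the
graph-signal smoothness of the label signal (x^T L x; lower = smoother)."""
import torch
import torch.nn.functional as F


def latent_knn_graph(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Symmetric k-NN adjacency matrix built from cosine similarities."""
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.T
    sim.fill_diagonal_(float("-inf"))          # exclude self-loops
    topk = sim.topk(k, dim=1).indices          # k nearest neighbours per node
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    return torch.maximum(adj, adj.T)           # symmetrise


def label_smoothness(adj: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Smoothness of the one-hot label signal on the graph: trace(X^T L X)."""
    laplacian = torch.diag(adj.sum(dim=1)) - adj
    x = F.one_hot(labels).float()
    return torch.trace(x.T @ laplacian @ x)


if __name__ == "__main__":
    # Stand-in for intermediate activations of some trained network.
    feats = torch.randn(100, 32)
    labels = torch.randint(0, 4, (100,))
    adj = latent_knn_graph(feats, k=5)
    print("label smoothness:", label_smoothness(adj, labels).item())
```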
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.