Runtime Performances Benchmark for Knowledge Graph Embedding Methods
- URL: http://arxiv.org/abs/2011.04275v1
- Date: Thu, 5 Nov 2020 21:58:11 GMT
- Title: Runtime Performances Benchmark for Knowledge Graph Embedding Methods
- Authors: Angelica Sofia Valeriani
- Abstract summary: This paper focuses on providing a characterization of the runtime performances of state-of-the-art implementations of KGE algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on providing a characterization of the runtime
performances of state-of-the-art implementations of KGE algorithms, in terms of
memory footprint and execution time. Despite the rapidly growing interest in
KGE methods, little attention has so far been devoted to their comparison and
evaluation; in particular, previous work has mainly focused on accuracy in
specific tasks, such as link prediction. To this end, a framework is proposed
for evaluating available KGE implementations against graphs with different
properties, with a particular focus on the effectiveness of the adopted
optimization strategies. Graphs and models have been trained on different
architectures, in order to highlight properties of both the models and the
architectures they were trained on. The experiments in this document show that
multithreading is effective, but its benefit decreases as the number of threads
grows on CPU. The GPU proves to be the best architecture for the given task,
although a CPU with vectorized instructions still performs well. Finally, the
RAM required to load the graph never changes between architectures and depends
only on the type of graph, not on the model.
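As a rough illustration of the kind of measurement the abstract describes, the sketch below times a single KGE training run and records peak resident memory while sweeping the number of CPU threads. This is not the framework used in the paper: `train_kge` is a hypothetical placeholder for whichever KGE implementation is under test, and the harness assumes a PyTorch-based implementation running on a Unix system (for `torch.set_num_threads` and `resource.getrusage`).

```python
# Minimal benchmarking sketch (assumption: PyTorch-based KGE code on Unix).
import time
import resource

import torch


def train_kge(num_epochs: int) -> None:
    """Hypothetical placeholder for one training run of a KGE model (e.g. TransE)."""
    ...


def benchmark(num_threads: int, num_epochs: int = 10) -> tuple[float, float]:
    torch.set_num_threads(num_threads)      # cap intra-op CPU parallelism
    start = time.perf_counter()
    train_kge(num_epochs)
    elapsed = time.perf_counter() - start
    # ru_maxrss is reported in kilobytes on Linux; convert to megabytes.
    # Note: it is a process-lifetime peak, so in a real study each thread
    # configuration would be launched in a fresh process.
    peak_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
    return elapsed, peak_mb


if __name__ == "__main__":
    for threads in (1, 2, 4, 8, 16):
        t, mem = benchmark(threads)
        print(f"{threads:2d} threads: {t:8.2f} s, peak RSS {mem:8.1f} MB")
```

The same harness can be pointed at a GPU-backed run (leaving the thread sweep aside) to compare execution time across architectures, while the memory reading reflects the host-side footprint of loading the graph.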
Related papers
- GC-Bench: An Open and Unified Benchmark for Graph Condensation [54.70801435138878]
We develop a comprehensive Graph Condensation Benchmark (GC-Bench) to analyze the performance of graph condensation.
GC-Bench systematically investigates the characteristics of graph condensation in terms of the following dimensions: effectiveness, transferability, and complexity.
We have developed an easy-to-use library for training and evaluating different GC methods to facilitate reproducible research.
arXiv Detail & Related papers (2024-06-30T07:47:34Z)
- GC4NC: A Benchmark Framework for Graph Condensation on Node Classification with New Insights [30.796414860754837]
Graph condensation (GC) is an emerging technique designed to learn a significantly smaller graph that retains the essential information of the original graph.
This paper introduces GC4NC, a comprehensive framework for evaluating diverse GC methods on node classification.
Our systematic evaluation offers novel insights into how condensed graphs behave and the critical design choices that drive their success.
arXiv Detail & Related papers (2024-06-24T15:17:49Z) - Temporal Graph Benchmark for Machine Learning on Temporal Graphs [54.52243310226456]
Temporal Graph Benchmark (TGB) is a collection of challenging and diverse benchmark datasets.
We benchmark each dataset and find that the performance of common models can vary drastically across datasets.
TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research.
arXiv Detail & Related papers (2023-07-03T13:58:20Z)
- ParaGraph: Weighted Graph Representation for Performance Optimization of HPC Kernels [1.304892050913381]
We introduce a new graph-based program representation for parallel applications that extends the Abstract Syntax Tree.
We evaluate our proposed representation by training a Graph Neural Network (GNN) to predict the runtime of an OpenMP code region.
Results show that our approach is effective, with a normalized RMSE ranging from as low as 0.004 to at most 0.01 in its runtime predictions.
arXiv Detail & Related papers (2023-04-07T05:52:59Z)
- gSuite: A Flexible and Framework Independent Benchmark Suite for Graph Neural Network Inference on GPUs [0.0]
We develop a benchmark suite that is framework independent, supports versatile computational models, and can easily be used with architectural simulators without additional effort.
gSuite enables performing detailed performance characterization studies on GNN Inference using both contemporary GPU profilers and architectural GPU simulators.
We use several evaluation metrics to rigorously measure the performance of GNN computation.
arXiv Detail & Related papers (2022-10-20T21:18:51Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- A Multi-scale Graph Signature for Persistence Diagrams based on Return Probabilities of Random Walks [1.745838188269503]
We explore the use of a family of multi-scale graph signatures to enhance the robustness of topological features.
We propose a deep learning architecture to handle this set input.
Experiments on benchmark graph classification datasets demonstrate that our proposed architecture outperforms other persistent homology-based methods.
arXiv Detail & Related papers (2022-09-28T17:30:27Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Meta-path Analysis on Spatio-Temporal Graphs for Pedestrian Trajectory Prediction [6.685013315842084]
We present the Meta-path Enhanced Structural Recurrent Neural Network (MESRNN), a generic framework that can be applied to any spatio-temporal task in a simple and scalable manner.
We employ MESRNN for pedestrian trajectory prediction, utilizing these meta-path based features to capture the relationships between the trajectories of pedestrians at different points in time and space.
The proposed model consistently outperforms the baselines in trajectory prediction over long time horizons by over 32%, and produces more socially compliant trajectories in dense crowds.
arXiv Detail & Related papers (2022-02-27T19:09:21Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.