KGTuner: Efficient Hyper-parameter Search for Knowledge Graph Learning
- URL: http://arxiv.org/abs/2205.02460v1
- Date: Thu, 5 May 2022 06:09:14 GMT
- Title: KGTuner: Efficient Hyper-parameter Search for Knowledge Graph Learning
- Authors: Yongqi Zhang and Zhanke Zhou and Quanming Yao and Yong Li
- Abstract summary: We propose an efficient two-stage search algorithm, which explores HP configurations on a small subgraph.
Experiments show that our method can consistently find better HPs than the baseline algorithms within the same time budget.
- Score: 36.97957745114711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While hyper-parameters (HPs) are important for knowledge graph (KG) learning,
existing methods fail to search them efficiently. To solve this problem, we
first analyze the properties of different HPs and measure the transfer ability
from a small subgraph to the full graph. Based on the analysis, we propose an
efficient two-stage search algorithm, KGTuner, which explores HP
configurations on a small subgraph in the first stage and transfers the
top-performing configurations for fine-tuning on the large full graph in the
second stage. Experiments show that our method consistently finds better HPs
than the baseline algorithms within the same time budget, achieving a
9.1% average relative improvement for four embedding models on the
large-scale KGs in the Open Graph Benchmark.
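As a concrete (if simplified) picture of the two-stage procedure, the sketch below explores random configurations on a small subgraph and re-evaluates only the best ones on the full graph; the hyper-parameter space, the random sampler, and the train_and_eval / sample_subgraph callbacks are illustrative placeholders, not KGTuner's actual search space or implementation.
```python
import random

# Illustrative hyper-parameter space (placeholder values, not the paper's actual space).
HP_SPACE = {
    "embedding_dim":  [100, 200, 500],
    "learning_rate":  [1e-4, 1e-3, 1e-2],
    "batch_size":     [256, 512, 1024],
    "regularization": [1e-7, 1e-5, 1e-3],
}

def sample_config(space):
    """Draw one random configuration from the space."""
    return {name: random.choice(values) for name, values in space.items()}

def two_stage_search(train_and_eval, sample_subgraph, full_graph,
                     n_explore=50, n_transfer=5):
    """Stage 1: cheaply score many configs on a small subgraph.
    Stage 2: re-evaluate only the top configs on the full graph."""
    subgraph = sample_subgraph(full_graph)

    # Stage 1: broad, cheap exploration on the subgraph.
    stage1 = []
    for _ in range(n_explore):
        cfg = sample_config(HP_SPACE)
        stage1.append((train_and_eval(cfg, subgraph), cfg))
    stage1.sort(key=lambda pair: pair[0], reverse=True)

    # Stage 2: transfer the top-performing configs and fine-tune on the full graph.
    best_score, best_cfg = float("-inf"), None
    for _, cfg in stage1[:n_transfer]:
        score = train_and_eval(cfg, full_graph)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```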
Related papers
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
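As a rough sketch of that graph-prompt idea, the toy encoder below mean-aggregates neighbor features and projects them into the LM's embedding space before prepending the result to the token embeddings; the single-layer aggregation, the linear projection, and the prepend step are illustrative assumptions, not GPEFT's actual architecture.
```python
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    """Toy GNN that mean-aggregates neighbor features into a single
    'graph prompt' vector in the LM's embedding space."""
    def __init__(self, feat_dim, lm_dim):
        super().__init__()
        self.proj = nn.Linear(feat_dim, lm_dim)

    def forward(self, node_feat, neighbor_feats):
        # one round of mean aggregation over the node and its neighbors
        agg = torch.cat([node_feat.unsqueeze(0), neighbor_feats], dim=0).mean(dim=0)
        return self.proj(agg)  # shape: (lm_dim,)

def build_lm_inputs(token_embeddings, graph_prompt):
    """Prepend the graph prompt to the LM token embeddings.
    token_embeddings: (seq_len, lm_dim) -> returns (seq_len + 1, lm_dim)."""
    return torch.cat([graph_prompt.unsqueeze(0), token_embeddings], dim=0)
```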
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs [49.547988001231424]
We propose one-shot-subgraph link prediction to achieve efficient and adaptive prediction.
The design principle is that, instead of directly acting on the whole KG, the prediction procedure is decoupled into two steps: first extract a query-dependent subgraph, then predict on it.
We achieve improved efficiency and leading performance on five large-scale benchmarks.
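A rough illustration of that extract-then-predict decoupling follows; the bounded-BFS extraction and the score_fn placeholder are assumptions made for the sketch, not the paper's actual sampler or reasoner (which also conditions on the query relation).
```python
from collections import deque

def extract_subgraph(adj, query_entity, max_hops=2, max_nodes=1000):
    """Step 1: extract a small query-dependent subgraph (here: bounded BFS)."""
    visited, frontier = {query_entity}, deque([(query_entity, 0)])
    while frontier and len(visited) < max_nodes:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))
    return visited

def one_shot_predict(adj, query_entity, score_fn, k=10):
    """Step 2: score candidate entities only inside the extracted subgraph."""
    candidates = extract_subgraph(adj, query_entity)
    candidates.discard(query_entity)
    ranked = sorted(candidates, key=lambda e: score_fn(query_entity, e), reverse=True)
    return ranked[:k]
```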
arXiv Detail & Related papers (2024-03-15T12:00:12Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and a 16.8% performance gain on ogbn-products and snap-patents, and also scale LargeGT to ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
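A minimal sketch of that second step (embedding node texts from the fine-tuned LM's last hidden states) is given below; the checkpoint path is hypothetical, the PEFT fine-tuning itself is not shown, and mean pooling over tokens is an assumption rather than SimTeG's exact pooling choice.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "finetuned-lm-checkpoint"  # hypothetical path to the PEFT-tuned LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

@torch.no_grad()
def embed_nodes(node_texts, batch_size=32):
    """Mean-pool the last hidden states of each node's text into one vector."""
    embeddings = []
    for i in range(0, len(node_texts), batch_size):
        batch = tokenizer(node_texts[i:i + batch_size], padding=True,
                          truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state       # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)    # (B, T, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)   # masked mean over tokens
        embeddings.append(pooled)
    return torch.cat(embeddings)  # node features for a downstream GNN
```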
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Efficiently Learning the Graph for Semi-supervised Learning [4.518012967046983]
We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant (~10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
arXiv Detail & Related papers (2023-06-12T13:22:06Z)
- Behavior of Hyper-Parameters for Selected Machine Learning Algorithms: An Empirical Investigation [3.441021278275805]
Hyper-parameters (HPs) are an important part of machine learning (ML) model development and can greatly influence performance.
This paper studies their behavior for three algorithms: Extreme Gradient Boosting (XGB), Random Forest (RF), and Feedforward Neural Network (FFNN) with structured data.
Our empirical investigation examines the qualitative behavior of model performance as the HPs vary, quantifies the importance of each HP for different ML algorithms, and assesses the stability of performance near the optimal region.
arXiv Detail & Related papers (2022-11-15T22:14:52Z)
- Start Small, Think Big: On Hyperparameter Optimization for Large-Scale Knowledge Graph Embeddings [4.3400407844815]
We introduce GraSH, an efficient multi-fidelity HPO algorithm for large-scale knowledge graphs.
GraSH obtains state-of-the-art results on large graphs at a low cost.
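The summary does not spell out the fidelity schedule, but a common reading of multi-fidelity HPO is successive halving over increasing budgets (e.g., growing subgraph fractions); the sketch below shows that generic pattern only, not GraSH's specific algorithm.
```python
def successive_halving(configs, evaluate, fidelities=(0.1, 0.3, 1.0)):
    """Evaluate all configs at the lowest fidelity (e.g., a small subgraph),
    keep the best half at each round, and re-evaluate the survivors at the
    next higher fidelity until the full-fidelity winner remains."""
    survivors = list(configs)
    for fidelity in fidelities:
        scores = [(evaluate(cfg, fidelity), cfg) for cfg in survivors]
        scores.sort(key=lambda pair: pair[0], reverse=True)
        keep = max(1, len(scores) // 2)
        survivors = [cfg for _, cfg in scores[:keep]]
    return survivors[0]
```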
arXiv Detail & Related papers (2022-07-11T16:07:16Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Genealogical Population-Based Training for Hyperparameter Optimization [1.0514231683620516]
We experimentally demonstrate that our method reduces the required computational cost by a factor of 2 to 3.
Our method is search-algorithm agnostic, so the inner search routine can be any search algorithm such as TPE, GP, CMA, or random search.
arXiv Detail & Related papers (2021-09-30T08:49:41Z)
- A Note on Graph-Based Nearest Neighbor Search [4.38837720322254]
We show that a high clustering coefficient makes most of the k nearest neighbors of q lie in a maximum strongly connected component (SCC) of the graph.
We prove that the commonly used graph-based search algorithm is guaranteed to traverse the maximum SCC once it visits any point in it.
arXiv Detail & Related papers (2020-12-21T02:18:05Z)
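For context on the item above, the sketch below is a bare-bones version of the greedy best-first search commonly used on proximity graphs; the adjacency-dict format and the distance callback are placeholder choices for the sketch, not the note's exact formulation.
```python
def greedy_graph_search(graph, dist_to_query, start, max_steps=1000):
    """Greedy best-first search on a proximity graph: repeatedly move to the
    neighbor closest to the query until no neighbor improves the distance."""
    current = start
    for _ in range(max_steps):
        neighbors = graph.get(current, ())
        if not neighbors:
            break
        best = min(neighbors, key=dist_to_query)
        if dist_to_query(best) >= dist_to_query(current):
            break  # local minimum: no neighbor is closer to the query
        current = best
    return current
```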