A Novel Genetic Algorithm with Hierarchical Evaluation Strategy for
Hyperparameter Optimisation of Graph Neural Networks
- URL: http://arxiv.org/abs/2101.09300v2
- Date: Tue, 26 Jan 2021 11:38:54 GMT
- Title: A Novel Genetic Algorithm with Hierarchical Evaluation Strategy for
Hyperparameter Optimisation of Graph Neural Networks
- Authors: Yingfang Yuan and Wenjun Wang and George M. Coghill and Wei Pang
- Abstract summary: This research presents a novel genetic algorithm with a hierarchical evaluation strategy (HESGA). The proposed strategy uses fast evaluation at a lower level to recommend candidates to a higher level, where full evaluation acts as a final assessor maintaining a group of elite individuals.
- Score: 7.139436410105177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representations of structured data can facilitate the extraction of
stereoscopic features, and they have demonstrated excellent performance when combined
with deep learning systems, the so-called Graph Neural Networks (GNNs).
Choosing a promising architecture for constructing GNNs can be cast as a
hyperparameter optimisation problem, a very challenging task due to the size of
the underlying search space and the high computational cost of evaluating
candidate GNNs. To address this issue, this research presents a novel genetic
algorithm with a hierarchical evaluation strategy (HESGA), which combines the
full evaluation of GNNs with a fast evaluation approach. In full evaluation, a
GNN represented by a set of hyperparameter values is trained on a specified
dataset, and the root mean square error (RMSE) measures the quality of that GNN
(for regression problems). In the proposed fast evaluation, training is
interrupted at an early stage, and the difference in RMSE between the starting
and interrupted epochs serves as a fast score indicating the potential of the
candidate GNN. To coordinate both types of evaluation, the proposed hierarchical
strategy uses fast evaluation at a lower level to recommend candidates to a
higher level, where full evaluation acts as a final assessor maintaining a group
of elite individuals. To validate the effectiveness of HESGA, we apply it to
optimise two types of deep graph neural networks. The experimental results on
three benchmark datasets demonstrate its advantages over Bayesian
hyperparameter optimisation.
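To make the two-level strategy concrete, here is a minimal Python sketch of one way to realise it. The `build_and_train` callback (assumed to return a per-epoch validation RMSE history), the epoch counts, and the elite-group size are hypothetical placeholders for illustration, not values taken from the paper.

```python
# Placeholder assumptions (not from the paper):
FAST_EPOCHS = 5     # epoch at which fast evaluation interrupts training
FULL_EPOCHS = 100   # epochs used by full evaluation
ELITE_SIZE = 10     # size of the elite group kept at the higher level


def fast_evaluate(hyperparams, build_and_train):
    """Lower-level fast score: drop in RMSE between the starting and
    interrupted epochs (a larger drop suggests a more promising GNN)."""
    rmse_history = build_and_train(hyperparams, epochs=FAST_EPOCHS)
    return rmse_history[0] - rmse_history[-1]


def full_evaluate(hyperparams, build_and_train):
    """Higher-level full evaluation: final validation RMSE after
    complete training (lower is better, for regression problems)."""
    rmse_history = build_and_train(hyperparams, epochs=FULL_EPOCHS)
    return rmse_history[-1]


def hierarchical_evaluation(population, build_and_train, elites):
    """Fast-score the population, promote the most promising candidates,
    and let full evaluation maintain the group of elite individuals."""
    scored = sorted(population,
                    key=lambda hp: fast_evaluate(hp, build_and_train),
                    reverse=True)                 # biggest RMSE drop first
    for hp in scored[:ELITE_SIZE]:                # recommend to higher level
        elites.append((full_evaluate(hp, build_and_train), hp))
    elites.sort(key=lambda pair: pair[0])         # lowest full RMSE first
    del elites[ELITE_SIZE:]                       # keep only the elite group
    return elites
```

In a complete GA, selection, crossover, and mutation would produce each new population of hyperparameter sets, with `hierarchical_evaluation` called once per generation to update the elites.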
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing a loss function derived from the QUBO relaxation (a generic sketch of such a loss appears after this list).
Results of experiments show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- How Expressive are Graph Neural Networks in Recommendation? [17.31401354442106]
Graph Neural Networks (GNNs) have demonstrated superior performance on various graph learning tasks, including recommendation.
Recent research has explored the expressiveness of GNNs in general, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test.
We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes.
arXiv Detail & Related papers (2023-08-22T02:17:34Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) combine the rich relational structure of graph data with representation learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method to maintain all three metrics considered in the paper.
arXiv Detail & Related papers (2021-10-26T15:56:16Z)
- A Genetic Algorithm with Tree-structured Mutation for Hyperparameter Optimisation of Graph Neural Networks [8.02401104726362]
Graph neural networks (GNNs) have gained increasing attention, as they possess excellent capability of processing graph-related problems.
In practice, hyperparameter optimisation (HPO) is critical for GNNs to achieve satisfactory results.
We propose a tree-structured mutation strategy for the genetic algorithm (GA) to alleviate the challenges of HPO.
arXiv Detail & Related papers (2021-02-24T00:31:52Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
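As background for the QUBO-relaxation loss mentioned in the QRF-GNN entry above: in unsupervised GNN approaches to combinatorial optimisation, the binary QUBO objective x^T Q x is commonly made differentiable by substituting node probabilities for the binary variables. The sketch below shows that generic construction; it is an assumption-based illustration, not code from the QRF-GNN paper.

```python
import torch

def qubo_relaxation_loss(Q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Differentiable relaxation of the binary QUBO objective x^T Q x.

    Q: (n, n) coefficient matrix encoding the combinatorial problem.
    p: (n,) node probabilities in [0, 1] produced by the GNN (e.g. a
       sigmoid over its output logits), standing in for the binary x.
    """
    return p @ Q @ p
```

Minimising this loss trains the GNN end-to-end; thresholding the final probabilities (e.g. p_i > 0.5) rounds them back to a candidate binary solution.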