A Genetic Algorithm with Tree-structured Mutation for Hyperparameter
Optimisation of Graph Neural Networks
- URL: http://arxiv.org/abs/2102.11995v1
- Date: Wed, 24 Feb 2021 00:31:52 GMT
- Title: A Genetic Algorithm with Tree-structured Mutation for Hyperparameter
Optimisation of Graph Neural Networks
- Authors: Yingfang Yuan, Wenjun Wang, Wei Pang
- Abstract summary: Graph neural networks (GNNs) have gained increasing attention, as they excel at processing graph-related problems.
In practice, hyperparameter optimisation (HPO) is critical for GNNs to achieve satisfactory results, but the process is costly.
We propose a tree-structured mutation strategy for the genetic algorithm (GA) to alleviate this cost.
- Score: 8.02401104726362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, graph neural networks (GNNs) have gained increasing
attention, as they excel at processing graph-related problems. In practice,
hyperparameter optimisation (HPO) is critical for GNNs to achieve satisfactory
results, but the process is costly because evaluating different hyperparameter
settings requires training an excessive number of GNNs. Many approaches have
been proposed for HPO, aiming to identify promising hyperparameters
efficiently. In particular, the genetic algorithm (GA) has been explored for
HPO; it treats the GNN as a black-box model whose outputs can only be observed
for a given set of hyperparameters. However, because GNN models are extremely
sophisticated and hyperparameter evaluations on GNNs are expensive, the GA
requires advanced techniques to balance the exploration and exploitation of
the search and to make the optimisation effective given limited computational
resources. We therefore propose a tree-structured mutation strategy for the GA
to alleviate this issue. We also review recent HPO work in which the
tree-structured idea has room to develop, and we hope our approach can further
improve these HPO methods in the future.
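Since the GNN is treated as a black box, the surrounding loop is a standard GA; only the mutation operator is structured. Below is a minimal sketch of that idea, assuming hyperparameters are organised into a tree of groups and mutation first selects a branch and then a leaf within it. The group names, value ranges, and the surrogate fitness function are illustrative assumptions, not the paper's actual search space or operator.

```python
import random

# Illustrative hyperparameter tree; the grouping, names, and ranges are
# assumptions for this sketch, not the paper's actual search space.
SEARCH_TREE = {
    "graph_layers": {"hidden_dim": [16, 32, 64, 128],
                     "num_layers": [1, 2, 3, 4]},
    "training":     {"learning_rate": [1e-4, 1e-3, 1e-2],
                     "dropout": [0.0, 0.2, 0.5]},
}

def random_individual():
    # Sample a value for every leaf of the tree.
    return {g: {h: random.choice(vs) for h, vs in leaves.items()}
            for g, leaves in SEARCH_TREE.items()}

def tree_mutate(ind):
    # Tree-structured mutation: first pick a branch (a hyperparameter
    # group), then a leaf inside it, rather than mutating uniformly
    # over the flat space.
    child = {g: dict(leaves) for g, leaves in ind.items()}
    group = random.choice(list(SEARCH_TREE))
    leaf = random.choice(list(SEARCH_TREE[group]))
    child[group][leaf] = random.choice(SEARCH_TREE[group][leaf])
    return child

def fitness(ind):
    # Stand-in for the expensive step: training a GNN with these
    # hyperparameters and returning its validation score.
    return ind["graph_layers"]["hidden_dim"] * (1 - ind["training"]["dropout"])

population = [random_individual() for _ in range(8)]
for generation in range(5):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # truncation selection
    population = parents + [tree_mutate(random.choice(parents))
                            for _ in range(4)]
print(max(population, key=fitness))
```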
Related papers
- Attentional Graph Neural Networks for Robust Massive Network
Localization [20.416879207269446]
Graph neural networks (GNNs) have emerged as a prominent tool for classification tasks in machine learning.
This paper integrates GNNs with an attention mechanism to tackle a challenging nonlinear regression problem: network localization.
We first introduce a novel network localization method based on graph convolutional network (GCN), which exhibits exceptional precision even under severe non-line-of-sight (NLOS) conditions.
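As a rough illustration of GCN-based localization, the sketch below runs an (untrained) two-layer graph convolution that maps node features to 2-D coordinate predictions; the symmetric normalisation is the standard GCN propagation, while the toy graph, feature dimensions, and regression head are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 8 nodes, random undirected adjacency (an assumption for the
# sketch; in localization the edges would come from radio connectivity).
n, d_in, d_hid = 8, 4, 16
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
A_hat = A + np.eye(n)                                   # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^-1/2 (A+I) D^-1/2

X = rng.standard_normal((n, d_in))      # node measurements (e.g. signal strengths)
W1 = rng.standard_normal((d_in, d_hid)) # untrained weights
W2 = rng.standard_normal((d_hid, 2))    # head predicting (x, y)

H = np.maximum(S @ X @ W1, 0.0)         # GCN layer 1 + ReLU
coords = S @ H @ W2                     # GCN layer 2: 2-D position estimates
print(coords.shape)                     # (n, 2)
```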
arXiv Detail & Related papers (2023-11-28T15:05:13Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural
Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
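To make "even-polynomial graph filter" concrete: the representation uses only even powers of the propagation matrix, so odd-hop neighbours are ignored. A minimal sketch, with illustrative filter weights (in EvenNet these would be learned):

```python
import numpy as np

def even_polynomial_filter(S, X, weights):
    # Computes Z = sum_k weights[k] * S^(2k) @ X.  Because only even
    # powers of the propagation matrix S appear, information from
    # odd-hop neighbours never enters the output.
    Z = np.zeros_like(X)
    P = np.eye(S.shape[0])        # S^0
    for w in weights:
        Z = Z + w * (P @ X)
        P = P @ S @ S             # advance two hops at a time
    return Z

S = np.eye(3) * 0.5               # toy propagation matrix
X = np.arange(6.0).reshape(3, 2)  # toy node features
print(even_polynomial_filter(S, X, [1.0, 0.5, 0.25]))
```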
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- Which Hyperparameters to Optimise? An Investigation of Evolutionary
Hyperparameter Optimisation in Graph Neural Network For Molecular Property
Prediction [8.02401104726362]
The study of graph neural networks (GNNs) has attracted much attention, and GNNs have achieved promising performance in molecular property prediction.
We focus on the impact of selecting two types of GNN hyperparameters, those belonging to graph-related layers and those of task-specific layers, on the performance of GNNs for molecular property prediction.
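The split the paper studies can be pictured as two disjoint hyperparameter groups searched separately; the sketch below optimises one group while freezing the other. The names, ranges, and toy scoring function are assumptions for illustration.

```python
import itertools

# Illustrative two-group search space (assumed names and ranges).
GRAPH_LAYER_HPS = {"gnn_hidden_dim": [32, 64, 128], "num_gnn_layers": [2, 3]}
TASK_LAYER_HPS  = {"mlp_hidden_dim": [64, 128],     "num_mlp_layers": [1, 2]}

def grid(space):
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def optimise_group(group, fixed, evaluate):
    # Search only `group`, holding the other group's setting `fixed`,
    # which isolates that group's impact on final performance.
    return max(({**fixed, **cand} for cand in grid(group)), key=evaluate)

# Toy score standing in for training a GNN on a molecular dataset.
score = lambda cfg: cfg["gnn_hidden_dim"] / (10 * cfg["num_gnn_layers"])
best = optimise_group(GRAPH_LAYER_HPS,
                      {"mlp_hidden_dim": 64, "num_mlp_layers": 1}, score)
print(best)
```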
arXiv Detail & Related papers (2021-04-13T09:21:27Z)
- Interpreting and Unifying Graph Neural Networks with An Optimization
Framework [47.44773358082203]
Graph Neural Networks (GNNs) have received considerable attention on graph-structured data learning.
In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem.
Our proposed unified optimization framework, summarizing the commonalities between several of the most representative GNNs, opens up new opportunities for flexibly designing new GNNs.
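One well-known instance of such a connection, which a framework of this kind generalises, is that graph-regularised least squares, min_Z ||Z - H||_F^2 + alpha * tr(Z^T L Z), has the closed-form solution Z = (I + alpha*L)^{-1} H, i.e. a propagation step. A minimal sketch of that base instance (alpha and the toy inputs are illustrative, not the paper's parametrisation):

```python
import numpy as np

def propagate(H, L, alpha=1.0):
    # Closed-form minimiser of ||Z - H||_F^2 + alpha * tr(Z^T L Z):
    #     Z = (I + alpha * L)^{-1} H
    # Different fitting terms and regularisers recover different GNN
    # propagation mechanisms; this is the simplest instance.
    n = L.shape[0]
    return np.linalg.solve(np.eye(n) + alpha * L, H)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian of a path
H = np.eye(3)                           # toy node features
print(propagate(H, L, alpha=0.5))
```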
arXiv Detail & Related papers (2021-01-28T08:06:02Z)
- A Novel Genetic Algorithm with Hierarchical Evaluation Strategy for
Hyperparameter Optimisation of Graph Neural Networks [7.139436410105177]
This research presents a novel genetic algorithm with a hierarchical evaluation strategy (HESGA).
The proposed hierarchical strategy uses fast evaluation at a lower level to recommend candidates to a higher level, where full evaluation acts as a final assessor to maintain a group of elite individuals.
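A minimal sketch of a two-level evaluation in this spirit: a cheap score (e.g., a few training epochs) shortlists candidates, and only the shortlist pays for the full evaluation that maintains the elite set. The function names and toy evaluators are assumptions, not HESGA's exact procedure.

```python
def hierarchical_select(population, fast_eval, full_eval, k_fast, k_elite):
    # Lower level: cheap screening of the whole population.
    shortlist = sorted(population, key=fast_eval, reverse=True)[:k_fast]
    # Higher level: expensive full evaluation only for the shortlist;
    # its verdict maintains the group of elite individuals.
    return sorted(shortlist, key=full_eval, reverse=True)[:k_elite]

# Toy usage: individuals are plain numbers, evaluators are stand-ins for
# partial vs. complete GNN training.
population = [0.2, 0.9, 0.4, 0.7, 0.1, 0.8]
elites = hierarchical_select(population,
                             fast_eval=lambda x: x,       # few epochs
                             full_eval=lambda x: x ** 2,  # full training
                             k_fast=4, k_elite=2)
print(elites)
```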
arXiv Detail & Related papers (2021-01-22T19:19:59Z)
- Genetic-algorithm-optimized neural networks for gravitational wave
classification [0.0]
We propose a new method for hyperparameter optimization based on genetic algorithms (GAs).
We show that the GA can discover high-quality architectures when the initial hyperparameter seed values are far from a good solution.
Using genetic algorithm optimization to refine an existing network should be especially useful if the problem context changes.
arXiv Detail & Related papers (2020-10-09T03:14:20Z)
- A Study of Genetic Algorithms for Hyperparameter Optimization of Neural
Networks in Machine Translation [0.0]
We propose an automatic tuning method modeled on Darwin's survival-of-the-fittest principle via a genetic algorithm (GA).
Results show that the proposed method, a GA, outperforms a random selection of hyperparameters.
arXiv Detail & Related papers (2020-09-15T02:24:16Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with
Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment existing GNNs with stochastic node representations.
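One assumed reading of this augmentation, sketched below: draw random node embeddings and propagate them over the graph, so nearby nodes end up with correlated features that encode proximity; these are then concatenated with the GNN's own permutation-equivariant representations. The dimensions and hop count are illustrative.

```python
import numpy as np

def stochastic_proximity_features(S, dim=8, hops=2, seed=0):
    # Random node embeddings, smoothed over the graph: after propagation,
    # nodes that are close in the graph share correlated randomness,
    # which plain message passing cannot use to tell distant
    # lookalike nodes apart.
    rng = np.random.default_rng(seed)
    E = rng.standard_normal((S.shape[0], dim))
    for _ in range(hops):
        E = S @ E
    return E  # to be concatenated with the GNN's node representations

S = np.eye(5)  # toy propagation matrix (a normalised adjacency in practice)
print(stochastic_proximity_features(S).shape)  # (5, 8)
```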
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- An Asymptotically Optimal Multi-Armed Bandit Algorithm and
Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) for the scenario of hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments for SS and demonstrate the superior performance of BOSS on a number of applications.
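As a rough picture of bandit-style evaluation on growing budgets (a generic successive-halving-like sketch, not the paper's exact SS rule): score every configuration cheaply, discard the worse half, and re-evaluate the survivors with more resources.

```python
def subsample_search(configs, eval_on, budgets):
    # eval_on(config, budget) stands in for training/validating a model
    # on a budget-sized subsample; larger budgets give better estimates.
    survivors = list(configs)
    for budget in budgets:
        survivors = sorted(survivors,
                           key=lambda c: eval_on(c, budget),
                           reverse=True)[:max(1, len(survivors) // 2)]
    return survivors[0]

# Toy usage: configs are learning rates, the score is a made-up function.
best = subsample_search([1e-4, 1e-3, 1e-2, 1e-1],
                        eval_on=lambda lr, b: -abs(lr - 1e-3) * b,
                        budgets=[100, 1000, 10000])
print(best)
```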
arXiv Detail & Related papers (2020-07-11T03:15:21Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as on learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
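To make the efficiency claim concrete, a value-level sketch of binarization: real implementations replace the float matmul with XNOR/popcount bit kernels, so this only shows the semantics, and the layer shapes are assumptions.

```python
import numpy as np

def binarize(M):
    # Map every entry to {-1, +1}; storage drops to one bit per entry.
    return np.where(M >= 0.0, 1.0, -1.0)

def binary_gnn_layer(S, X, W):
    # One propagation layer with binarized features and parameters.
    return binarize(S @ binarize(X) @ binarize(W))

rng = np.random.default_rng(0)
S = np.eye(4)                       # toy propagation matrix
X = rng.standard_normal((4, 3))     # node features
W = rng.standard_normal((3, 5))     # layer weights
print(binary_gnn_layer(S, X, W))
```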
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.